When They Can Deal With Not Knowing
Most onboarding looks successful.
New testers complete tasks.
They run test cases.
They close bugs.
Everything appears to work.
The failure shows up later: the moment something unclear happens, the team slows down because nobody knows where to start.
People wait:
for test cases
for steps
for someone to decide what matters
That delay isn’t a skill issue.
It’s an onboarding outcome.
People were trained to follow structure, not act without it.
I don’t onboard that way.
I onboard people into real problems where structure is missing.
I don’t start with the product
I start with the person.
Not formal interviews. Just direct questions:
• What have you tested before?
• What do you do when you don’t understand a system?
• What usually confuses you?
• Where do you hesitate?
The answers are rarely technical.
One engineer told me:
“If there’s no test case, I don’t know where to begin.”
Another said:
“I avoid APIs unless someone guides me.”
That’s not just a skill gap.
That’s dependency.
From there, I adjust:
some need constraint
some need freedom
All need to face situations where the next step isn’t obvious.
I give them real problems, not safe ones
No sandbox.
No fake bugs.
Something like:
“This endpoint has been unstable since the last release.
Figure out what’s going on.”
No steps.
No hints.
Most people don’t start testing.
They pause.
They look for structure that isn’t there.
They search for test cases that don’t exist.
That hesitation matters.
So I ask:
“If nobody helped you, what would you try?”
Some change inputs.
Some check logs.
Some compare UI and API.
Some retry without direction.
Some actions help. Some don’t.
What matters is how they proceed when nothing is defined.
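One of those first moves can be as small as a throwaway probe script. A minimal sketch, with a made-up flaky function standing in for the real endpoint: hit it repeatedly and record what “unstable” actually looks like before deciding anything.

```python
# Hypothetical sketch: characterize an "unstable" endpoint before theorizing.
# flaky_endpoint() is a stand-in for the real service, not an actual API.
import random
from collections import Counter

random.seed(7)  # fixed seed so the sketch is repeatable

def flaky_endpoint():
    """Simulates an endpoint that fails intermittently (~30% of calls)."""
    return 500 if random.random() < 0.3 else 200

# Count status codes over many calls; the shape of the distribution tells you
# whether the failure is constant, intermittent, or worth isolating further
# (payload, timing, load).
observed = Counter(flaky_endpoint() for _ in range(100))
```

Against a real endpoint you would swap the stub for an HTTP call and log payloads alongside status codes, but the move is the same: turn “unstable” into data.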
This is where thinking starts forming
Testing is not following instructions.
It is learning about a system when things are unclear.
I introduce simple prompts:
• What inputs could break this?
• Where does this data go next?
• What happens when values are extreme or missing?
• Where could this fail without anyone noticing?
Not as rules.
As ways to look deeper.
For example, one engineer was testing a profile update API.
They tried valid inputs. Everything worked.
So I asked:
“What happens if age is -1?”
They tried it.
The API accepted it.
No error. No validation.
We followed the data:
profile service stored it
analytics pipeline consumed it
reports showed invalid demographics
Now the issue wasn’t “missing validation.”
It was:
bad data quietly spreading across systems
Same endpoint.
Different understanding.
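The missing check itself is easy to sketch. A minimal, hypothetical version of the validation the profile service lacked, plus the boundary probes a tester might try once the happy path passes; the accepted range (0 to 130) is my assumption, not a spec:

```python
# Hypothetical sketch of the validation the profile service was missing.
# The accepted age range (0-130) is an illustrative assumption.

def validate_age(age):
    """Accept only plausible integer ages; reject bools, strings, None."""
    if not isinstance(age, int) or isinstance(age, bool):
        return False
    return 0 <= age <= 130

# Boundary and type probes a tester might try after valid inputs all pass:
probes = [-1, 0, 130, 131, None, "25", True]
rejected = [p for p in probes if not validate_age(p)]
# -1 is exactly the kind of value that, unvalidated, flows downstream
# into analytics and reports as "valid" demographic data.
```

The test that matters here isn’t the function; it’s the habit of asking what happens one step past the values everyone expects.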
After each session, I ask:
“What would you try first if this were a new system?”
Over time, answers change:
from actions to reasoning
from steps to patterns
A moment that changes how they see risk
A new engineer once tested a payment flow and said:
“Works fine.”
I asked:
“Who gets hurt if this is wrong?”
We walked through it:
double charges
partial refunds
delayed confirmations
wrong currency handling
retries creating duplicate transactions
You could see the shift happen.
The next time they came back with:
“What if confirmation times out, but payment succeeds?”
That shift matters more than any checklist.
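The timeout question points at a concrete retry hazard: a client that never hears a confirmation will retry, and without idempotency that retry becomes a second charge. A toy sketch, using an in-memory stand-in rather than any real payment API, of how an idempotency key absorbs the retry:

```python
# Toy sketch: an idempotency key keeps a retried request from double-charging.
# In-memory stand-in for illustration, not a real payment API.

class PaymentService:
    def __init__(self):
        self._processed = {}  # idempotency key -> original charge result

    def charge(self, idempotency_key, amount_cents):
        # A retry carrying the same key gets the original result back
        # instead of creating a second transaction.
        if idempotency_key in self._processed:
            return self._processed[idempotency_key]
        result = {"status": "charged", "amount_cents": amount_cents}
        self._processed[idempotency_key] = result
        return result

svc = PaymentService()
first = svc.charge("order-42", 1999)  # confirmation to the client times out...
retry = svc.charge("order-42", 1999)  # ...so the client sends it again
```

One charge recorded, not two. A tester who asks “what if confirmation times out, but payment succeeds?” is really asking whether this property exists.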
Where this breaks (and how I adjust)
This approach doesn’t always land cleanly.
One engineer struggled for days.
They kept asking:
“What exactly should I test?”
“Which cases do you want?”
“Is this enough?”
At one point they said:
“I feel like I’m doing nothing right.”
The problem wasn’t lack of effort.
They couldn’t move without direction.
So I adjusted:
narrowed the problem
increased feedback
made my thinking visible
They didn’t need easier work.
They needed help seeing how to move through it.
They join real decisions early
Not after onboarding. During it.
Bug triage. Scope discussions. Release decisions.
At first, they stay quiet.
So I ask:
• Which issue worries you most?
• Which one could affect the most users?
• What feels unclear here?
Sometimes they’re wrong.
That’s useful.
We break it down:
why some issues matter more
how timing changes impact
why business flow matters more than surface behavior
That’s how judgment develops.
Reflection is where patterns become visible
Every engineer tracks:
• what confused them
• what they missed
• what surprised them
• what they’d try next time
But the value isn’t in writing it down.
It’s in what we notice.
One engineer kept missing data flow issues.
They focused only on UI behavior.
We saw the pattern after a few sessions.
So the next task forced them to trace requests across services.
Same product.
Different focus.
That’s when improvement became visible.
How I know it’s working
Not when tasks are completed.
When behavior changes.
• They ask better questions before testing
• They connect bugs to user impact
• They notice failures that don’t show clearly
• They explain risk, not just behavior
• They challenge assumptions in discussions
That’s the signal.
Not knowledge.
Not tools.
Thinking.
Final words
Most onboarding prepares people for clear situations.
Real work isn’t clear.
So when something unexpected happens, teams slow down.
People wait.
Not because they lack ability.
Because they were trained to depend on structure.
You don’t see the difference during onboarding.
You see it the first time something breaks and nobody tells them what to do.
Some people wait.
Some people start.
Onboarding decides which one you get.
If you found this helpful, stay connected with Life of QA for more real-world testing experiences, tips, and lessons from the journey!









