Three months ago, I asked three different AI models the same complex question about understanding my own neurodivergence. All three gave me helpful answers and pushed back on my assumptions in thoughtful ways.

Then I noticed something: they were all using almost the exact same sentence structure to challenge me. Different words, same rhythm. Like they'd all learned to be helpful in the same key.

The gap I was filling

I'm autistic and ADHD. My brain needs sustained, deep exploration to solve complex problems. Not quick chats. I need to return to ideas iteratively, challenge them from multiple angles, and let them develop over time.

That kind of thinking partner is hard to find in daily life, so AI became that partner.

After my diagnosis, I spent weeks trying to understand what neurodivergence actually meant for how I navigate the world. I'd describe a situation and we'd break down what was happening cognitively. I'd test hypotheses: is this an autism thing, an ADHD thing, or just a me thing? Then I'd bring in new observations and map them against what we'd already explored.

Over time, I built a working model of my own cognition that's genuinely useful. I could think at 11pm when an idea wouldn't let me sleep. I could explore tangents without worrying about someone else's time. I could be wrong, messy, uncertain, and work through it without social friction.

Cognitive triangulation

I started using multiple models to stress-test the same problem. Not to see which was better, but to see where they converged, where they diverged, and why.

I'd ask "What am I not seeing?" and the AI would list five perspectives. I'd pick the one that made me most uncomfortable and say "Argue for that one as strongly as you can." Then I'd push back. Then it would push back. We weren't looking for agreement. We were mapping the territory.

The differences in how models respond, where one hedges and another commits, where one explores and another concludes, reveal more about the problem than any single answer could.

The moment it shifted

I'd been going back and forth with one model for about an hour and decided to test something. I opened two others, pasted in the same context, and asked the same question.

All three gave thoughtful, challenging responses. But when I laid them side by side, they were all being helpful in the same way. Not identical, but the same underlying structure. The same careful tone. The same tendency to validate before challenging.

I wasn't just thinking with AI. I was thinking inside what I've started calling a cognitive mirror: something that reflects your thinking back in increasingly sophisticated ways.

Sometimes that reflection is useful. It helps you see your own patterns, catch assumptions, notice gaps. But sometimes you're just having an elaborate conversation with a mirror that rearranges your thoughts, challenges them gently, presents them from slightly different angles, but never truly thinks differently from you.

Where this goes

I'm still using AI every day. But I'm paying attention now in ways I wasn't before: noticing where the thinking feels genuinely expanded, and where it feels like reflection dressed up as dialogue.

I don't have answers. But the questions feel worth documenting.