Everything hard has the same plot twist:
Change the representation, and the impossible collapses into muscle memory.
Below is the short version of how that trick works. First for humans, then for machines, and finally for the human-machine hybrid we’re stumbling toward.
1. Representation: the real bottleneck
| Domain | Day-1 reaction | After the shift |
|---|---|---|
| Chick sexing | “All chicks look identical.” | A two-second glance → 98% accuracy. |
| Radiology | Greyscale noise. | A tiny opacity reads as pneumonia. |
| Tensor calculus | “WHAT?” | Curved spacetime feels obvious. |
| CNN on irises | Random pixels. | Sex clusters in latent space. |
The skill jump never comes from brute IQ. It comes from finding a coordinate system where the answer jumps out. Once the stimulus is re-encoded correctly, the brain/CPU can’t not see it.
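The re-encoding claim is easy to make concrete with a toy problem (my own construction, not from the examples above): two concentric rings of points that no threshold on a raw coordinate can separate, but that a single threshold on the re-encoded feature r² = x² + y² separates perfectly.

```python
import math
import random

random.seed(0)

# Two classes: points inside radius 1 vs. points in an annulus (radius 1.5-2).
# In raw (x, y) coordinates no single threshold separates them; re-encode each
# point as r^2 = x^2 + y^2 and one threshold suffices.
def sample(n):
    points = []
    for _ in range(n):
        if random.random() < 0.5:
            r, label = random.uniform(0.0, 1.0), 0
        else:
            r, label = random.uniform(1.5, 2.0), 1
        theta = random.uniform(0, 2 * math.pi)
        points.append((r * math.cos(theta), r * math.sin(theta), label))
    return points

data = sample(1000)
labels = [l for _, _, l in data]

def best_threshold_accuracy(values, labels):
    # Try a threshold at every observed value, in both polarities.
    best = 0
    for t in values:
        hits = sum((v > t) == bool(l) for v, l in zip(values, labels))
        best = max(best, hits, len(labels) - hits)
    return best / len(labels)

acc_x = best_threshold_accuracy([x for x, _, _ in data], labels)          # raw coordinate
acc_r2 = best_threshold_accuracy([x * x + y * y for x, y, _ in data], labels)  # re-encoded

print(f"best threshold on x:   {acc_x:.2f}")
print(f"best threshold on r^2: {acc_r2:.2f}")
```

Same data, same trivially weak classifier; only the coordinate system changed, and the problem collapses from near-chance to perfect.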
2. Emotion = biology’s loss function
A neural net has gradients. We have cortisol and dopamine.
- Severance’s macrodata refiners tag numbers that feel “scary.”
- Programmers get a neck-hair ping when a line “looks wrong.”
- Chess grandmasters sense danger three moves before they can articulate it.
Affect is the error signal that tightens the loop. Crank up the emotion → steeper gradient → faster convergence.
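The "steeper gradient → faster convergence" line can be made literal with a sketch (my construction; the `affect` multiplier is an assumption standing in for emotional salience): scaling the loss scales its gradient, and plain gradient descent on a quadratic then needs fewer steps to reach the target.

```python
# Toy model: minimize affect * (w - target)^2 by gradient descent.
# A larger affect multiplier steepens the gradient, which acts like a
# larger effective learning rate.
def steps_to_converge(affect, lr=0.01, target=5.0, tol=1e-3):
    w = 0.0
    for step in range(1, 100_000):
        grad = affect * 2 * (w - target)  # d/dw [affect * (w - target)^2]
        w -= lr * grad
        if abs(w - target) < tol:
            return step
    return None  # did not converge

calm = steps_to_converge(affect=1.0)
alarmed = steps_to_converge(affect=5.0)
print(f"affect=1: {calm} steps; affect=5: {alarmed} steps")
```

One caveat the biology shares: in this sketch, once `lr * 2 * affect` exceeds 2 the updates overshoot and diverge. Panic is not a monotone win.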
3. Tacit ↔ explicit: the human flywheel
- Grind rules (flashcards, mnemonics).
- Compile intuition (the click).
- Verbalise new rules for the next learner.
Radiologists publish imaging heuristics they first felt subconsciously. Mathematicians extract lemmas from a colleague’s punch-card vibes. But some skills stall at step 2:
- Chick-sexers can’t explain their blur.
- London cabbies (“The Knowledge”) grow a literal hippocampal bulge yet remain poor at articulating step-by-step driving rules.
4. Augmented senses prove the point
Give the cortex a clean error signal and it rewires itself:
- Magnet implants → people feel power lines after weeks.
- Tongue display → blind users read shapes.
- Classic upside-down-glasses study → the world flips back to “normal” in ~30 days.
The plasticity is the constant; the representation is the variable.
5. AI as cognitive prosthesis: bandwidth limited
Today an LLM (or another deep-learning architecture) can:
- Spot latent directions (e.g., iris-gender) no human sees.
- Summarise a 500-page legal brief in under a second.
What it can’t do: pipe that latent space into our wetware at human-brain bandwidth. We’re stuck at keyboard speed. Solve the interface (think neural link) and we offload whole categories of reasoning the way pilots offload stability control to fly-by-wire. The airplane analogy holds up well: the pilot sets the goal and intention, and layers of augmentation sit between that intention and the flaps actually moving.
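A hedged sketch of what "a latent direction no human sees" can mean in practice (everything here is synthetic: invented embeddings, invented labels, not the iris study itself). The simplest latent direction is the difference of class means; projecting onto it reveals structure that no individual raw feature shows.

```python
import random

random.seed(1)
DIM = 64

def embed(label):
    # Hypothetical embedding: the class signal is spread thinly across every
    # dimension, so no single feature is informative on its own.
    signal = 0.15 if label else -0.15
    return [signal + random.gauss(0, 1) for _ in range(DIM)]

data = [(embed(label), label) for label in [0, 1] * 500]

def mean(vectors):
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(DIM)]

mu0 = mean([v for v, l in data if l == 0])
mu1 = mean([v for v, l in data if l == 1])
direction = [a - b for a, b in zip(mu1, mu0)]  # class-mean-difference direction

def project(v):
    return sum(x * d for x, d in zip(v, direction))

# Accuracy along the latent direction vs. the best single raw feature.
acc = sum((project(v) > 0) == bool(l) for v, l in data) / len(data)
best_single = max(
    sum((v[i] > 0) == bool(l) for v, l in data) / len(data) for i in range(DIM)
)
print(f"best single feature: {best_single:.2f}, latent direction: {acc:.2f}")
```

Each raw feature hovers near chance; the projection is far above it. The bandwidth problem is that this direction lives in the model, and today the only way to hand it to a human is a sentence.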
6. Checklist for “impossible” tasks
- Change the representation.
- Tighten the error signal (emotion or gradient).
- Run the tacit ↔ explicit flywheel.
- If humans stall, borrow a model’s latent space.
- Upgrade the bandwidth.
Until those boxes are ticked, “impossible” just means “poorly encoded.”
TL;DR
Impossibility isn’t a property of the problem; it’s a property of the interface.
Find a cleaner interface and the boundary moves. First for chick-sexers, next for CNNs, and soon for whatever hybrid thinker plugs the two together.