Waiting for AGI… let's appreciate human entropy
Waiting quite literally: Claude Sonnet is currently grinding through task 31 of phase 4/9 of an implementation plan built from a specification (GitHub spec-kit), and the human-in-the-loop (that's me) is bored, so I'm listening to Dwarkesh's podcast with Andrej Karpathy: "AGI is still a decade away" (if not longer…). A wise man is worth listening to. Given the topic and the pace of change, the recording is already prehistoric: October 2025. I'm jotting down the quotes that go against the grain:
You're not getting the richness and the diversity and the entropy from these models as you would get from humans. Humans are a lot noisier, but at least they're not biased, in a statistical sense. They're not silently collapsed. They maintain a huge amount of entropy. [...]
Humans collapse during the course of their lives. This is why children, they haven't overfit yet. They will say stuff that will shock you because you can see where they're coming from, but it's just not the thing people say, because they're not yet collapsed. But we're collapsed. We end up revisiting the same thoughts. We end up saying more and more of the same stuff, and the learning rates go down, and the collapse continues to get worse, and then everything deteriorates. [...]
You always have to seek entropy in your life. Talking to other people is a great source of entropy, and things like that.
Okay… I'll admit that Claude is right now performing the exercise of resurrecting my blog, dead for years, whose subtitle read: "Don Quixote fighting entropy". Entropy and I have a score to settle. But entropy isn't just the morning ache in your bones; on the contrary, exposure to entropy can boost your adaptability and "rejuvenate" the brain.
Later the conversation turns to Karpathy's current hobbyhorse, education:
I guess what was fascinating to me was, I think I had a really good tutor, but just thinking through what this tutor was doing for me and how incredible that experience was and how high the bar is for what I want to build eventually. Instantly from a very short conversation, she understood where I am as a student, what I know and don't know. She was able to probe exactly the kinds of questions or things to understand my world model. No LLM will do that for you 100% right now, not even close. But a tutor will do that if they're good. Once she understands, she really served me all the things that I needed at my current sliver of capability. I need to be always appropriately challenged. I can't be faced with something too hard or too trivial, and a tutor is really good at serving you just the right stuff.
Human vs. AI: 2:0. But the thought that follows runs deeper. Karpathy argues that with AI's help we can try to reach the level of the Korean-language tutor he mentioned:
Earlier on, I built CS231n at Stanford, which I think was the first deep learning class at Stanford, which became very popular. The difference in building out 231n then and LLM101N now is quite stark. I feel really empowered by the LLMs as they exist right now, but I'm very much in the loop. They're helping me build the materials, I go much faster. They're doing a lot of the boring stuff, etc. I feel like I'm developing the course much faster, and it's LLM-infused, but it's not yet at a place where it can creatively create the content. I'm still there to do that. The trickiness is always calibrating yourself to what exists.
Okay, the AI has finished; time for the human to get to work. I recommend the whole interview. And the analogy between education and the gym: first-rate.
Originally published on LinkedIn.