- [The sound of inevitability \| My place to put things](https://tomrenner.com/posts/llm-inevitabilism/)
- [Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc) - YouTube](https://www.youtube.com/watch?v=ZSRHeXYDLko&t=3563s)
---
This philosophical talk by Jonathan Blow from 2019 got me to pause and think more fundamentally about AI.
He pointed out that abstractions are good: we are smart, we are saving effort! But at the same time, with every abstraction we give up a capability (!).
This seems applicable to AI. Martin Fowler wrote that AI is another level of abstraction, and I think that's true. Soon we might not need to understand the underlying technology; the AI will create a product out of a spec. The spec is the new abstraction. That is great, we are saving effort. At the same time, we are losing the capability to write the systems ourselves. Jonathan showed in the talk that this might not be that far-fetched. Our civilization, it turns out, has lost knowledge before, and the process can be slow and hard to notice.
He also gave the example of an older technical guy who was a brutally efficient coder. His "secret" was that he knew the underlying things, the things behind the abstractions, so he was not limited by them. I kept thinking about how to approach technology in the rise of AI. The smartest people I know in the tech industry are the ones who know the details behind the abstractions; they are the innovators, the doers.