What's more, Turing showed that achieving universality doesn't require anything fancy. The basic equipment of a universal machine is no more advanced than a kid's abacus — operations like incrementing a counter, decrementing it, and jumping on a condition are all it takes to build software of any complexity: be it a calculator, Minecraft, or an AI chatbot.
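To make that concrete, here's a minimal sketch of such a machine in Python. The instruction names (`INC`, `DEC`, `JZ`, `HALT`) and the tuple encoding are my own illustrative choices, not any standard; the point is just that this tiny instruction set — increment, decrement, conditional jump — is already enough to compute with (given unbounded registers, machines of this kind are Turing complete):

```python
def run(program, registers):
    """Execute a list of (op, *args) tuples until HALT."""
    pc = 0  # program counter
    while True:
        op, *args = program[pc]
        if op == "INC":          # increment a register
            registers[args[0]] += 1
            pc += 1
        elif op == "DEC":        # decrement a register (floor at zero)
            registers[args[0]] = max(0, registers[args[0]] - 1)
            pc += 1
        elif op == "JZ":         # jump to an address if a register is zero
            pc = args[1] if registers[args[0]] == 0 else pc + 1
        elif op == "HALT":
            return registers

# Addition, abacus-style: move counter 0 into counter 1 one tick at a time.
add = [
    ("JZ", 0, 4),   # 0: if register 0 is empty, we're done
    ("DEC", 0),     # 1: register 0 -= 1
    ("INC", 1),     # 2: register 1 += 1
    ("JZ", 2, 0),   # 3: register 2 stays 0, so this jump is unconditional
    ("HALT",),      # 4
]

print(run(add, {0: 3, 1: 4, 2: 0}))  # {0: 0, 1: 7, 2: 0}
```

Everything else — multiplication, Minecraft, a chatbot — is "just" more of the same, stacked very, very high.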
Likewise, consciousness might just be an emergent property of the software running an AGI, much like how the simple hardware of a universal machine gives rise to arbitrarily complex capabilities. Personally, I don't buy the idea of something sitting on top of the physical human brain — no immortal soul or astral "I" floating around in higher dimensions. It's all just flesh and bone. Think of it like an anthill: this incredibly complex system doesn't need some divine spirit to explain its organized society, impressive architecture, or mushroom farms. The anthill, often described as a superorganism, exhibits intricate behaviour that emerges from the interactions of its individual ants without being reducible to any one of them. A single ant wandering around a terrarium won't tell you much about the anthill as a whole. Brain neurons are like those ants — pretty dumb on their own, but get around 86 billion of them together, and suddenly you've got an "I" with all its experiences, dreams, and… consciousness.
So basically, if something can think, it can also think about itself: a mind general enough to model the world can just as easily turn that modelling inward. That makes consciousness a natural by-product of thinking — it comes with the territory. And if self-reflection really does come free with general intelligence, then you can't have the one without the other, which brings us back to the whole Skynet thing.