

Which is exactly my point. A biological brain, human or otherwise, is incredibly efficient for what it does. It’s also effectively infinitely parallel, which is impossible to replicate with current tech.
In order to even attempt or approach a system that could remotely be considered “conscious,” we would need something far more efficient, purely for logistical reasons. What they are trying to do with current hardware has basically hit the practical ceiling of scalability.
Hardware footprint and power are massive constraints. Current data centers can’t even run at full capacity because the power grid can’t supply enough power, and what they do draw is driving energy costs up for everyone. On top of that, a biological brain is far denser. We would need absurd orders of magnitude more hardware to come close with current tech; a rough back-of-envelope comparison is sketched below.
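A minimal sketch of that gap, assuming widely cited round numbers (roughly 10^14 synapses and ~20 W for a human brain, ~700 W for a current datacenter GPU) and the very crude simplification that one synapse is comparable to one model parameter:

```python
# Back-of-envelope only; every figure here is an approximation, and
# equating a synapse with a parameter is itself a huge simplification.
brain_synapses = 1e14      # ~100 trillion synapses
brain_watts = 20           # human brain runs on roughly 20 W

gpu_params = 1e11          # a ~100B-parameter model on one accelerator (assumed)
gpu_watts = 700            # typical TDP of a current datacenter GPU

brain_density = brain_synapses / brain_watts   # ~5e12 "connections" per watt
gpu_density = gpu_params / gpu_watts           # ~1.4e8 parameters per watt

print(f"rough efficiency gap: ~{brain_density / gpu_density:,.0f}x")  # ~35,000x
```

Even with these generous assumptions the gap is four to five orders of magnitude, which is the point: it isn’t a matter of building a bigger data center.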
And then there is the software. Neural nets are a dumbed-down model of how brains work, and the simplification runs deep. Part of that simplification is static weights: the models do not update themselves during execution, because doing so would quickly corrupt the trained weights and produce nonsense. They don’t have feedback mechanisms. We train them on one thing, and that’s it (a quick demonstration of the frozen weights is below).
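Here is a minimal sketch of what “static weights” means in practice, using a toy PyTorch model (the model and sizes are made up for illustration): inference can run forever without a single parameter changing.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained network; real models are just bigger stacks of these.
model = nn.Linear(16, 16)
model.eval()                      # inference mode: no dropout, no running-stat updates

before = model.weight.clone()

with torch.no_grad():             # no gradients tracked, so no weight updates possible
    for _ in range(1000):
        _ = model(torch.randn(1, 16))

# The parameters are bit-for-bit identical after any number of forward passes.
assert torch.equal(before, model.weight)
```

Nothing the model “experiences” at runtime feeds back into its parameters; learning only happens in a separate, offline training phase.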
In the case of LLMs, they are trained on the structure of language. We can’t train for meaning, because that would require unimaginable orders of magnitude more complexity to even attempt; all the training objective ever scores is which token comes next, as sketched below.
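A minimal sketch of that training objective, next-token prediction (the tensors here are random stand-ins for a real model’s output and a tokenized corpus): the loss only rewards predicting which symbol follows, and has no term for what the symbols refer to.

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 1000, 8
token_ids = torch.randint(0, vocab_size, (1, seq_len))   # a tokenized sentence
logits = torch.randn(1, seq_len, vocab_size)              # model's prediction at each position

# Shift by one: the prediction at position t is scored against token t+1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    token_ids[:, 1:].reshape(-1),
)
```

That single cross-entropy over “what comes next” is the whole supervision signal; any apparent grasp of meaning is a side effect of statistical structure in the text.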
If AGI or artificial sentience is possible, it will never be done with the current tech. I would argue the bubble has likely set AI research back decades, because the short-sighted, ham-fisted way companies are pushing it has soured public perception.
At the rate things have been going, we are certainly headed that way, and we get there by accepting this kind of unnecessary privacy violation.