“The point is, GPT-2 has faculties. It has specific skills, that require a certain precision of thought, like counting from one to five, or mapping a word to its acronym, or writing poetry. These faculties are untaught; they arise naturally from its pattern-recognition and word-prediction ability. All these deep understanding things that humans have, like Reason and so on, those are faculties. AIs don’t have them yet. But they can learn.”
“The version that proves P != NP will still just be a brute-force pattern-matcher blending things it’s seen and regurgitating them in a different pattern. The proof won’t reveal that the AI’s not doing that; it will just reveal that once you reach a rarefied enough level of that kind of thing, that’s what intelligence is. I’m not trying to play up GPT-2 or say it’s doing anything more than anyone else thinks it’s doing. I’m trying to play down humans. We’re not that great. GPT-2-like processes are closer to the sorts of things we do than we would like to think.”
SSC argues that GPT-2 has general abilities within the machine-readable realm of text and language. In reality, GPT-2 shows one way a general intelligence could be developed in the written world.
Ben Evans writes about how computational photography needs to be plugged into narrow AI. A “sensing” AGI is surely harder, and requires sensing systems to be developed in the machine realm.
Virtual-reality modelling (think also of AI playing major video games) could be a tool to train an AI for the physical world without having to resort to physical sensors and a body. Will it be different?