of gods, civilizations and the 1-million mark

“In almost every world region for which we have data, moralizing gods tended to follow, not precede, increases in social complexity.” The threshold seems to be about 1 million people.
Read more: https://www.smithsonianmag.com/smart-news/which-came-first-vengeful-gods-or-complex-civilizations-180971781/

in Gurren Lagann the gods are more vengeful than moralizing once 1 million people live on the surface

how does Göbekli Tepe fit in this? Around 9,500 BCE nomadic people probably became sedentary because of this religious site, then buried it 500 years later. Maybe a Gurren Lagann thing

good old article on the age of machines

https://www.lrb.co.uk/v37/n05/john-lanchester/the-robots-are-coming

“A thorough, considered and disconcerting study of that possibility was undertaken by two Oxford economists, Carl Benedikt Frey and Michael Osborne, in a paper from 2013 called ‘The Future of Employment: How Susceptible Are Jobs to Computerisation?’”

review of

- Tyler Cowen, Average Is Over

- Erik Brynjolfsson and Andrew McAfee, The Second Machine Age

productivity, employment and robot-driven deflation are dealt with: “It says a lot about the current moment that as we stand facing a future which might resemble either a hyper-capitalist dystopia or a socialist paradise, the second option doesn’t get a mention”

Computation vs. domain knowledge in AI

the Bitter Lesson of Rich Sutton says that progress in 70 years of AI has come from general methods that leverage computation, whose power has grown with Moore’s Law. Methods built on specific domain knowledge have repeatedly lost out to “brute force”

the Better Lesson of Rodney Brooks retorts that Moore’s Law might now yield a doubling only every 20 years; Dennard scaling (constant power density) broke down around 2006, so there is not so much computing power to waste, and research is very active on network architectures and custom-designed chips.
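To put rough numbers on the gap between the two views (my own back-of-envelope arithmetic, not from either post), compare the classic ~2-year doubling period with the ~20-year one Brooks suggests:

```python
def growth(years, doubling_period):
    """Multiplicative increase in compute over `years`, given a doubling period."""
    return 2 ** (years / doubling_period)

# Classic Moore's Law pace over two decades: 2^10 = 1024x more compute.
print(growth(20, 2))
# Brooks's pace over the same span: just 2x.
print(growth(20, 20))
```

Three orders of magnitude of difference over twenty years, which is why "wait for more compute" stops being a free strategy.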

In a way both are right, speaking at different levels of generalization. Sutton:

“In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.”
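A toy sketch (mine, not from Sutton's essay) of what "only convolution and certain kinds of invariances" means: convolution is a generic sliding dot product, and shifting the input shifts the output the same way (translation equivariance), with no vision-specific knowledge baked in:

```python
def conv1d(signal, kernel):
    """Valid cross-correlation: the core operation of a convolutional layer."""
    n = len(signal) - len(kernel) + 1
    return [sum(s * k for s, k in zip(signal[i:i + len(kernel)], kernel))
            for i in range(n)]

x = [0, 0, 1, 2, 1, 0, 0, 0]
edge = [1, 0, -1]  # toy edge-detecting kernel

y = conv1d(x, edge)
# Translation equivariance: shifting the input by one shifts the output by one.
x_shifted = [0] + x[:-1]
assert conv1d(x_shifted, edge)[1:] == y[:-1]
```

The "domain knowledge" here is only the choice of this generic operation and its symmetry, not hand-designed edge or SIFT detectors.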

Server costs, cloud is better

AI with faculties

“The point is, GPT-2 has faculties. It has specific skills, that require a certain precision of thought, like counting from one to five, or mapping a word to its acronym, or writing poetry. These faculties are untaught; they arise naturally from its pattern-recognition and word-prediction ability. All these deep understanding things that humans have, like Reason and so on, those are faculties. AIs don’t have them yet. But they can learn.”

“The version that proves P != NP will still just be a brute-force pattern-matcher blending things it’s seen and regurgitating them in a different pattern. The proof won’t reveal that the AI’s not doing that; it will just reveal that once you reach a rarefied enough level of that kind of thing, that’s what intelligence is. I’m not trying to play up GPT-2 or say it’s doing anything more than anyone else thinks it’s doing. I’m trying to play down humans. We’re not that great. GPT-2-like processes are closer to the sorts of things we do than we would like to think.”

GPT-2 As Step Toward General Intelligence

SSC argues that GPT-2 has general abilities in the machine-readable realm of text; language is its reality. GPT-2 shows a way a general intelligence could be developed in the written world.

Ben Evans writes about how computational photography plugs cameras into narrow AI; a “sensing” AGI is surely harder, and would need sensing systems developed in the machine realm

https://www.ben-evans.com/benedictevans/2019/2/5/cameras-that-understand

Virtual reality modelling (think also of AI playing major videogames) could be a tool to train AI for the physical world without resorting to physical sensors and a body. Will it be different?