Roger Penrose Nobel

Nobel in Physics for something about black holes, but I know nothing of Penrose, except that I very much liked Moravec's dialogue between Penrose and a robot

https://frc.ri.cmu.edu/~hpm/project.archive/general.articles/1994/Penrose.SM.review

Penrose believes that intelligence is not computational; Moravec begged to differ in his own way 🙂

Penrose on intelligence https://www.youtube.com/watch?v=hXgqik6HXc0

Orthogonality, USA elections and AI

In my USA bubble on Twitter, mostly founders and founder-philosophers, Robin Hanson is making the rounds with his post that advises To Oppose Polarization, Tug Sideways

Americans are in shock after 2 boomers, worse, 2 Silent Generationers (born 1942 and 1946) babbled at each other without much sense for 1 hour. Politics is a multidimensional tug-of-war, and in certain cases it is better to mind your own fight and tug sideways

Orthogonality is a condition that ensures AIs need not obliterate us humans https://arbital.com/p/orthogonality/, found on the Twitter of the founder of the rationalist community LessWrong and of this nice Arbital site with lots of tutorials

The purpose of advocating orthogonality differs in the two cases:

To Hanson, when politics gets factitious, better to go orthogonal.

The AIs in Moravec's dialogue with … think of him as too factitious and cannot be bothered anymore with the real 4-dim world; they have gone their own multidimensional way

Here we come to orthogonality in its AI meaning: AIs need not get confrontational with us, there's plenty of orthogonal room 🙂

The day of the @@@morphic words

“I clarify: that was truth, not humor. The GPT setup is precisely isomorphic to training a (huge) neural net to compress online text to ever-smaller strings, then using the trained net to decompress random bytes.”

Isomorphic: a mapping between 2 structures that preserves the structure and can be reversed.
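To make the quoted compression framing concrete, here is a toy sketch of my own (not from the quoted author), with a crude unigram character model standing in for the neural net: under any probabilistic model, an arithmetic coder can encode text in about -log2 p(text) bits, so a better predictor is literally a better compressor.

```python
import math
from collections import Counter

# Sketch of "prediction = compression": under a probabilistic model p,
# an arithmetic coder encodes a string s in about -log2 p(s) bits.
# Here the "model" is just unigram character frequencies (a stand-in
# for a trained net like GPT).
text = "the cat sat on the mat. the cat sat on the hat."

counts = Counter(text)
total = sum(counts.values())
prob = {ch: n / total for ch, n in counts.items()}

# Ideal code length of the text under this model.
bits = sum(-math.log2(prob[ch]) for ch in text)
print(f"raw size:   {len(text) * 8} bits")
print(f"model code: {bits:.1f} bits")

# A sharper model (one that conditions on context, as GPT does) assigns
# the actual text higher probability, hence fewer bits. Running the
# decoder on random bits inverts the map: it yields samples from the
# model, i.e. "decompressing random bytes" into plausible text.
```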

“as organisms become more and more complex through evolution, they need to model reality with increasing accuracy to stay fit. At all times, their representation of reality must be homomorphic with reality itself. Or in other words, the true structure of our world must be preserved when converted into your brain’s representation of it.”

from here https://savsidorov.substack.com/p/tldr-the-interface-theory-of-perception

A homomorphism in algebra is a structure-preserving map between 2 algebraic structures of the same type; unlike an isomorphism, it need not be reversible (an isomorphism is exactly a bijective homomorphism).
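For the record, the two standard algebra definitions side by side (added as a memo, with groups as the example structure):

```latex
% Homomorphism: a structure-preserving map between two algebraic
% structures of the same type, e.g. groups (G, *) and (H, \circ):
\varphi : G \to H, \qquad \varphi(a * b) = \varphi(a) \circ \varphi(b)
\quad \text{for all } a, b \in G.

% Isomorphism: a bijective homomorphism, hence reversible; the two
% structures are then "the same up to relabeling":
\varphi \ \text{bijective} \implies \varphi^{-1} : H \to G
\ \text{is also a homomorphism, and } G \cong H.
```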

Brain links 16 Sep 2020

Joscha Bach on GPT-3 https://www.youtube.com/watch?v=FMfA6i60WDA: GPT-3 moves in a semantic space and masters relations between words, but can only deepfake understanding

Joscha built the MicroPsi architecture based on Psi-theory https://en.wikipedia.org/wiki/Psi-theory#MicroPsi_architecture

Curious Wavefunction traces a history of information and thermodynamics http://wavefunction.fieldofscience.com/2020/07/brains-computation-and-thermodynamics.html

He ends it with the idea that our brain is a mixture of digital and analog processes, as posited by von Neumann

Sav Sidorov: another Joscha Bach video with highlights https://savsidorov.substack.com/p/tldr-joscha-bach-artificial-consciousness

“Some people think that a simulation can’t be conscious, and only a physical system can, but they got it completely backwards. A physical system cannot be conscious, only a simulation could be conscious.”

AI = general methods + the power of computers

like Communism was electrification + the power of the Soviets.

The Bitter Lesson by Rich Sutton http://www.incompleteideas.net/IncIdeas/BitterLesson.html


From Gwern's May newsletter on scaling and meta-learning:

“The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is ‘just’ simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale.”

This is somehow related: distributed intelligence and fungi, from The Curious Wavefunction, Life Distributed http://wavefunction.fieldofscience.com/2020/09/life-distributed.html

Scaling hypothesis in AI

Start from Gwern here https://www.gwern.net/newsletter/2020/05#gpt-3

Scaling up the parameters in GPT-3 runs into neither merely linear performance gains nor diminishing returns. Rather, it shows meta-learning enhancing the performance
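As a sketch of what those smooth returns look like: the empirical fits (Kaplan et al. 2020) put the loss on a power law in parameter count, L(N) ≈ (N_c/N)^α. The snippet below uses constants of roughly the reported order; treat it as an illustration of the functional form, not as exact numbers.

```python
# Illustrative power-law scaling of language-model loss with parameter
# count N: L(N) = (N_c / N) ** alpha. Constants are roughly the order
# reported in the scaling-law fits; they are here only to show the shape.
N_C = 8.8e13   # critical scale constant (illustrative)
ALPHA = 0.076  # scaling exponent (illustrative)

def loss(n_params: float) -> float:
    """Predicted cross-entropy loss at n_params parameters."""
    return (N_C / n_params) ** ALPHA

for n in [1.5e9, 13e9, 175e9, 1e12]:
    print(f"N = {n:8.1e}  ->  loss ~ {loss(n):.3f}")

# Each 10x in parameters buys a roughly constant multiplicative
# improvement: the curve keeps bending down instead of flattening out.
```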

It was forecast by Moravec, and since we are in a fat-tail phenomenon this holds true: “the scaling hypothesis is so unpopular an idea, and difficult to prove in advance rather than as a fait accompli“. Before GPT-3, another epiphany on scaling was the Google cat moment, which started the deep learning craze

Another idea which I like is that models like GPT-3 are definitely cheap, and if they show superlinear returns it is a no-brainer to go for bigger and more complex models; it is a long way before matching the billions spent on CERN or nuclear fusion.

Craig Venter's synthetic bacteria project cost US$40 million; groundbreaking projects costing so little should not be foregone

BTW, to grasp how there could be a scaling benefit in growing deep learning model sizes, go no further than a simple, unfounded but suggestive analogy with Metcalfe's law of networks: network value grows with the square of the number of nodes.
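A few lines to run the analogy, nothing more than the arithmetic it states:

```python
# Metcalfe's law: a network's value grows with the square of its node
# count, because the number of possible pairwise links does.
def metcalfe_value(nodes: int) -> int:
    """Possible links among `nodes` nodes: n*(n-1)/2, i.e. ~n^2 growth."""
    return nodes * (nodes - 1) // 2

for n in [10, 100, 1000]:
    print(f"{n:5d} nodes -> {metcalfe_value(n):7d} links")
# 10x the nodes gives ~100x the links; the (unfounded but suggestive)
# analogy: more units in a net, quadratically more interactions to exploit.
```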

Dick joke

“A man is at the doctor’s office, and the doctor tells him, “I’ve got some good news and some bad news for you.” The man says, “Well, I can’t take the bad news right now, so give me the good news first.”

The doctor says, “Well, the good news is that you have an 18-inch penis.”

The man looks stunned for a moment, and then asks, “What’s the bad news?”

The doctor says, “Your brain’s in your dick.””

A dick joke? Or rather, a doctor & dick joke, on LinkedIn. Why?

If it made you laugh, consider that it was generated by GPT-3, the text-generation model developed by OpenAI, successor to the much-publicized GPT-2 and 117x its size: 175 billion parameters

The growth curve of model parameter counts still follows a power curve, on an exponential growth path
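For a feel of that path, take the publicly reported parameter counts of the GPT series, one release per year:

```python
# Publicly reported parameter counts of the GPT series.
models = [("GPT-1", 2018, 117e6),
          ("GPT-2", 2019, 1.5e9),
          ("GPT-3", 2020, 175e9)]

for (name_a, year_a, n_a), (name_b, year_b, n_b) in zip(models, models[1:]):
    print(f"{name_a} -> {name_b}: {n_b / n_a:.0f}x in {year_b - year_a} year(s)")
# ~13x, then ~117x, in one-year steps: a straight line on a log scale,
# i.e. an exponential growth path.
```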

As size increases, meta-learning and stability improve as well; scale is still a winning strategy for the performance of neural network models

GPT-3 manages to create surprising texts from a prompt, and even to invent jokes. Let's note, though, that there is no correspondence between text and reality in the model: GPT-3 is an idiot savant, but a very wise one. You can read this in Gwern

Put to the test on math and chess too, it manages a decent performance; you could read about this from Scott Alexander of Slate Star Codex, a neo-rationalist blog and community, the best blog in town. Except that the New York Times threatened to publish the blogger's real name, exposing him to professional damage and danger, so Scott took the site offline

What a world: AIs make dick jokes and newspapers do doxxing

Deep learning and organic chemistry

There are things so specific that you never get to know them, and if you do, you cannot retrieve them unless you have noted down their name or some keyword

“neural networks applied to computed molecular fingerprints or expert-crafted descriptors and graph convolutional neural networks that construct a learned molecular representation by operating on the graph structure of the molecule.”

from here https://pubs.acs.org/doi/abs/10.1021/acs.jcim.9b00237
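As a toy illustration of the second approach named in the quote (a learned representation operating on the molecular graph), here is a single GCN-style layer in plain NumPy. It is my sketch of the general idea, not the model from the paper, and the atom featurization is made up:

```python
import numpy as np

# Toy molecular graph: three heavy atoms in a chain, C-C-O.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix: bonds
X = np.array([[6.0, 4.0],                # per-atom features, e.g.
              [6.0, 4.0],                # [atomic number, valence]
              [8.0, 2.0]])               # (hypothetical featurization)

# One graph-convolution layer: mix each atom's features with its
# neighbours' (A + I, degree-normalized), then a learned linear map + ReLU.
A_hat = A + np.eye(A.shape[0])
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
W = np.random.default_rng(0).normal(size=(2, 4))  # untrained weights

H = np.maximum(0.0, D_inv @ A_hat @ X @ W)  # learned atom representations
molecule_vec = H.sum(axis=0)                # pool atoms -> molecule vector
print(molecule_vec)  # after training, this would feed a property predictor
```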

More on AI and drug discovery, though

The idea that AI reinforces totalitarian weaknesses:

Fear not China: Seeing Like a Finite State Machine

“The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott’s Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood”

The latest link is where Italo Calvino can be used as a perfect metaphor for epistemic confrontation in the field of politics