Killer AIs, Symbiont AIs

So you think AIs are an existential risk, that they are paperclip maximizers and will eventually destroy us in the process of maximization. This is colonial and capitalist bad conscience bugging you, not the AI; why would an AI necessarily be colonialist toward humans?

Might be, hell knows, but still I imagine a more useful AI in symbiosis with humans, with humans as animals, in symbiosis with animals too, and with plants and ecosystems. AIs can recognize patterns; an AI that grows up with a dog could get more patterns out of its communications than we do, and could relay that info, via the AI symbiont, to the pet owner. Will kids speak to dogs and be spoken back to? Maybe a symbiont AI could do the trick. AIs could blanket the earth and be the communication medium with nature.

Of course deep learning is a block, but surely the challenge is material: compact energy, effective robotics, making AI robots great companions to kids, puppies, trees, woods, undersea vents, clouds, whatever. Why not worry about this outcome? Worry? Engage ourselves, children of the compost.

UPDATE June 2022: reading Philip Ball on Twitter, a new book of his is just out, “The Book of Minds” https://twitter.com/philipcball/status/1539869250269204480

In the thread I discovered this New Yorker feature, The Challenges of Animal Translation, subtitled: Artificial intelligence may help us decode animalese. But how much will we really be able to understand?

The philosophical question still stands: an artificial intelligence, upon becoming intelligent, will it choose to annihilate humans with paperclips, or will it be so amazed with nature that it will go symbiont and become the “medium”, the universal translator? (Sort of Hobbes vs Rousseau reloaded, uh?)

A whole set of readings on the issue in Philip Ball’s thread, Twitter at its best.

Natural Abstraction Hypothesis

Testing The Natural Abstraction Hypothesis: Project Intro

“Our physical world abstracts well: for most systems, the information relevant “far away” from the system (in various senses) is much lower-dimensional than the system itself.”

Reminds me of Hofstadter’s example with the sort of billiard balls, what does he call it?

Deutsch rather talks of emergent phenomena: we can usefully do thermodynamic calculations on a pot of boiling water, but if we wanted to be reductionists we would have to do impossible calculations on the dynamics of the bubbles, and the result would be the same.
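A toy sketch in Python (my own illustration, not from the post or from Deutsch) of what “the information relevant far away is lower-dimensional” could mean: a million microscopic degrees of freedom, of which a distant observer can only measure a couple of summary statistics, and those summaries do not care about the microscopic details.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "system": one million microscopic degrees of freedom,
# e.g. particle velocities in a box of gas.
velocities = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

# What a faraway observer can access is far lower-dimensional:
# a handful of summary statistics, not the million-entry state.
temperature_proxy = np.mean(velocities ** 2)  # ~ kinetic energy per particle
mean_drift = np.mean(velocities)

# Rearrange the whole microstate: which particle has which velocity changes,
# but the low-dimensional summary is exactly the same.
shuffled = rng.permutation(velocities)
assert np.isclose(np.mean(shuffled ** 2), temperature_proxy)
```

The reductionist route would track all million numbers; the abstraction keeps only the one or two that survive “far away”, and, as in the boiling-water example, the answer comes out the same.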

In the LessWrong post the hypothesis is developed in relation to AI and its “alignment”, a concept whose full extent I still do not grasp; it seems mostly theoretical and detached from reality.

“the natural abstraction hypothesis would dramatically simplify AI and AI alignment in particular. It would mean that a wide variety of cognitive architectures will reliably learn approximately-the-same concepts as humans use, and that these concepts can be precisely and unambiguously specified”

“If true, the natural abstraction hypothesis provides a framework for translating between high-level human concepts, low-level physical systems, and high-level concepts used by non-human systems.”