Sauron is communist; is the Palantir a libertarian tool?

Peter Thiel, in a speech: if blockchain is libertarian, can't we say AI is communist? He does not say where his own Palantir company stands, though.

“If we were to tell the two technological stories about scale at this point, one of them is still the sort of crypto revolution which is still going on with Bitcoin and has this sort of this libertarian potential. But I think there is sort of an alternate tech story which is about AI, big data, centralized databases, surveillance, which does not seem libertarian at all. You’re sort of going to have the big eye of Sauron watching you at all times, in all places. And I often think that we live in a world where the ideology always has a certain veil on it. So if we say that crypto is libertarian, why can’t we say that AI is communist, and at least have the sort of alternate account of scale?”

https://www.manhattan-institute.org/events/2019-wriston-lecture-end-computer-age-thiel#transcript


AI diffusion curve

1 some AI has percolated into AWS, Azure and other cloud services

2 some AI is done quietly under the hood in a great many tech companies

3 in some companies AI has enabled unique features, but the company does not go around selling “my AI will change your business” but rather “I alone have a cool feature you can’t do without”

4 AI is applied to non-tech real-world problems where data are in silos, away from Google and China

Of course there is a stack of cloud, algorithms and devices that allows old-world problems to be unbundled.

AI is a tech that is defining its S-curve much like databases did; ML is the new SQL: “over the past few decades we moved through databases, ‘productivity’, client-server, open-source, SaaS and Cloud. In parallel with new client platforms, we had new waves of architecture or development model, and that’s really a better way to look at machine learning – ML is the new SQL (and maybe crypto is in part the new open source)”

https://www.ben-evans.com/benedictevans/2019/10/4/machine-learning-deployment

Is the problem AI, or monopolies?

Reasoning on this: it is like sharing-economy workers; the new app-dependent freelancers are digging their own grave by working with Uber.

App economies give increasing returns to scale, capital piles up into winners, and dominant positions are achieved and then used to push labor-saving autonomy technologies that are not yet developed nor ready for market. Dominant positions in this way have a distortionary effect, with self-fulfilling consequences for employment.
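The winner-take-all dynamic behind this can be illustrated with a toy simulation of my own (not from any source cited here): new users choose between two hypothetical platforms with probability proportional to each platform’s share raised to a power `alpha`; when `alpha > 1` (increasing returns), small early leads snowball into dominance.

```python
import random

# Toy model of increasing returns to scale: users pick a platform with
# probability proportional to share**alpha. alpha > 1 makes the bigger
# platform disproportionately attractive, so early leads lock in.

def simulate(alpha: float, steps: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    users = [1, 1]  # both platforms start with one user each
    for _ in range(steps):
        a, b = users[0] ** alpha, users[1] ** alpha
        winner = 0 if rng.random() < a / (a + b) else 1
        users[winner] += 1
    return max(users) / sum(users)  # final market share of the leader

print(simulate(alpha=1.0))  # constant returns: no systematic lock-in
print(simulate(alpha=2.0))  # increasing returns: near winner-take-all
```

With `alpha = 2.0` one platform ends up with almost the entire market, regardless of which one gets lucky early; the distortion is structural, not a matter of product quality.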

But in the end it is not just automation and AI that create unemployment; it will be monopolies striving for labor-saving technologies while resources could have been used for augmenting solutions.

_______

Here the Huawei founder seems to suggest that AI will bring industrial advantage back to the countries with the most education and the most capital to spend: a polarization in international economic performance, where Switzerland with robots will produce like a country of 80 million, Germany like an 800-million country, etc. One order of magnitude, exponential technology. Meanwhile, less “developed” countries (less capital, fewer skills) will keep producing things not apt for robots and achieve less scale/network economies.

(The interview is here http://xinsheng.huawei.com/cn/index.php?app=group&mod=Bbs&act=detail&tid=4279851)

 

Robots and productivity

Krugman says not to blame robots for low wages since productivity is stagnating

Acemoglu says we should worry now, since the transition to AI won’t be like the transition from agriculture to industry: productivity will increase, the share of labour will decrease, and salaries might decrease

https://www.project-syndicate.org/commentary/ai-automation-labor-productivity-by-daron-acemoglu-and-pascual-restrepo-2019-03

 

Acemoglu again says that taxation favours robots over workers: accelerated depreciation of investments in machinery and software in the USA puts the effective tax on robots at 5% vs a tax on humans at 28% https://www.brookings.edu/bpea-articles/does-the-u-s-tax-code-favor-automation/
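The distortion is easy to make concrete with a back-of-the-envelope comparison. The 5% and 28% effective rates are the paper’s estimates; the pre-tax cost figures below are purely illustrative numbers of my own:

```python
# Back-of-the-envelope: after-tax cost of getting one unit of work done
# by a human vs. by equipment, under the effective tax rates Acemoglu
# cites for the US (28% on labor, 5% on capital equipment).

def after_tax_cost(pre_tax_cost: float, effective_tax_rate: float) -> float:
    """Cost to the firm once the effective tax wedge is added on top."""
    return pre_tax_cost * (1 + effective_tax_rate)

labor_pre_tax = 100.0  # hypothetical pre-tax cost of the human doing the task
robot_pre_tax = 100.0  # same pre-tax cost for the automated alternative

labor_cost = after_tax_cost(labor_pre_tax, 0.28)
robot_cost = after_tax_cost(robot_pre_tax, 0.05)

# Even when labor and capital are equally productive and equally priced
# pre-tax, the tax wedge alone makes automation cheaper to the firm.
advantage = (labor_cost - robot_cost) / labor_cost
print(f"labor: {labor_cost:.0f}, robot: {robot_cost:.0f}, wedge: {advantage:.1%}")
```

The point of the sketch: nothing about productivity has to change for firms to prefer robots; the tax code by itself tilts the choice.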


A law on how computing consumes less energy than radioing

“Transmitting bits of information, even with approaches like bluetooth low energy, is in the tens to hundreds of milliwatts in the best of circumstances at comparatively short range. The efficiency of radio transmission doesn’t seem to be improving dramatically over time either, there seem to be some tough hurdles imposed by physics that make improvements hard.

On a happier note, capturing data through sensors doesn’t suffer from the same problem. There are microphones, accelerometers, and even image sensors that operate well below a milliwatt, even down to tens of microwatts. The same is true for arithmetic. Microprocessors and DSPs are able to process tens or hundreds of millions of calculations for under a milliwatt, even with existing technologies, and much more efficient low-energy accelerators are on the horizon.”

What this means is that most data that’s being captured by sensors in the embedded world is just being discarded, without being analyzed at all.
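To see why this pushes analysis to the edge, here is a rough daily energy budget, a sketch using illustrative numbers picked from the ranges in the quote (radio in the tens of milliwatts, low-power compute well under a milliwatt); real figures vary widely by radio, range and chip:

```python
# Rough energy budget for a battery-powered sensor node: stream raw
# samples over a low-energy radio vs. analyze locally and transmit only
# rare detections. All numbers are illustrative, within the quote's ranges.

RADIO_POWER_W = 0.030      # ~30 mW while transmitting (short-range BLE)
COMPUTE_POWER_W = 0.0005   # ~0.5 mW for an always-on DSP/accelerator

SECONDS_PER_DAY = 24 * 3600
raw_stream_duty = 0.5      # radio on half the time to ship raw samples
detection_duty = 0.001     # radio on ~86 s/day for occasional detections

raw_stream_J = RADIO_POWER_W * SECONDS_PER_DAY * raw_stream_duty
edge_J = (COMPUTE_POWER_W * SECONDS_PER_DAY                 # model runs all day
          + RADIO_POWER_W * SECONDS_PER_DAY * detection_duty)

print(f"stream raw:    {raw_stream_J:8.1f} J/day")
print(f"edge + radio:  {edge_J:8.1f} J/day")
print(f"ratio: {raw_stream_J / edge_J:.0f}x")
```

Under these assumptions, running the model locally all day and radioing only results costs an order of magnitude less energy than shipping the raw data, which is exactly why on-device ML makes the otherwise-discarded sensor data usable.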

Scaling machine learning models to embedded devices

UPDATE: an example of things moving in that space, a startup with a chip to move AI to the “edge” https://techcrunch.com/2019/05/20/quadric-io-raises-15m-to-build-a-plug-and-play-supercomputer-for-autonomous-systems/

Predictive justice, totalitarian justice


If we teach Marxism-Leninism to an artificial intelligence, will it arrive at Stalin’s purges? A thought prompted by Hannah Arendt on Totalitarianism, on the “possible crime”, page 646:

“The evolution of the USSR could cause a crisis, a crisis could lead to the overthrow of Stalin’s dictatorship, this could weaken the military power of the country, and the situation created by a possible war could induce the new government to sign a truce or to conclude an alliance with Hitler. With this conclusion in hand, Stalin went on to declare that there was a conspiracy, in collusion with Hitler, to overthrow the government.”

historical necessity, forecast, blackbox models 🙂


 

AI and intuition

“It is really Python + intuition” me pitching this book to L.


https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438

Really, AI people are fixated with the intuition behind neural networks and all the architectural variations that come with it. I love all the graphical explanations of techniques that really get me to visualize and understand what happens. But then, can we make AI explainable through intuition?

the Concept of Intuition in Artificial Intelligence paper

Fake Intuitive Explanations in AI

 

 

good old article on the age of machines

https://www.lrb.co.uk/v37/n05/john-lanchester/the-robots-are-coming

“A thorough, considered and disconcerting study of that possibility was undertaken by two Oxford economists, Carl Benedikt Frey and Michael Osborne, in a paper from 2013 called ‘The Future of Employment: How Susceptible Are Jobs to Computerisation?’”

review of

-Tyler Cowen, Average Is Over

-The Second Machine Age, Erik Brynjolfsson and Andrew McAfee

Productivity, employment and robot deflation are dealt with: “It says a lot about the current moment that as we stand facing a future which might resemble either a hyper-capitalist dystopia or a socialist paradise, the second option doesn’t get a mention”

Computation vs. domain knowledge in AI

Rich Sutton’s Bitter Lesson says that progress in 70 years of AI has come from general methods relying on computation power, which has increased with Moore’s Law. Any method based on developing specific knowledge of the field has failed, while “brute force” search and learning have won.

Rodney Brooks’ Better Lesson retorts that Moore’s Law might now yield a doubling only every 20 years, and that Dennard scaling (energy invariance) broke down in 2006, so there is not so much computing power to waste; meanwhile research is very active on network architectures and custom-designed chips.

In a way both are right, speaking at different levels of generalization. Sutton:

“In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.”