Sabine Hossenfelder on Chiara Marletto

here: http://nautil.us/blog/the-end-of-reductionism-could-be-nigh-or-not

Chiara Marletto worked with David Deutsch on Constructor Theory (https://en.wikipedia.org/wiki/Constructor_theory): “a new mode of explanation in fundamental physics, first sketched out by David Deutsch, a quantum physicist at the University of Oxford, in 2012. Constructor theory expresses physical laws exclusively in terms of what physical transformations, or tasks, are possible versus which are impossible, and why”

Marletto’s book is indeed titled “The Science of Can and Can’t”. It’s about the role of counterfactuals in fostering the growth of science; it’s about method, and so Sabine is on it. I was actually wondering whether I should read Marletto’s book; now it is on the list, maybe not.

Sabine’s definition of emergence: “I have, above, used emergent to mean a property of a composite system that can be derived from the laws of the system’s constituents, but which doesn’t make sense on the level of constituents. Conductivity, for example, is a property of materials, but it makes no sense for individual electrons. Temperature is another example. Waves in water, cyclones, the capacity to self-reproduce—these are all emergent properties.” This is the weak emergence definition; the strong one would imply that the emergent properties cannot be derived from the laws of the components at all: “There is no known example in the real world for strong emergence (which is why physicists normally use the word “emergence” as synonym for “weak emergence”)”

Natural Abstraction Hypothesis

Testing The Natural Abstraction Hypothesis: Project Intro

“Our physical world abstracts well: for most systems, the information relevant “far away” from the system (in various senses) is much lower-dimensional than the system itself.”

Reminds me of that Hofstadter example with the sort-of billiard balls, what does he call it?

Deutsch rather talks of emergent phenomena: we can usefully do thermodynamic calculations on a pot of boiling water, but if we wanted to be reductionists we would have to do impossible calculations on the dynamics of the bubbles, and the result would be the same.
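To make the point concrete, here is a minimal numpy toy of my own (not from Wentworth’s post or from Deutsch, and every number is invented): the microstate of a “pot” is a million numbers, but everything a coarse, far-away observer can measure about it collapses into one summary statistic.

```python
import numpy as np

# Toy version of "abstracts well": the microstate is huge, but the
# far-away-relevant information is one number (temperature, here just
# the mean squared velocity). All numbers are invented for illustration.
rng = np.random.default_rng(0)
n_molecules = 1_000_000

# Pot A and pot B: two completely different microstates...
velocities_a = rng.normal(size=n_molecules)
velocities_b = rng.normal(size=n_molecules)

# ...but the same one-dimensional summary, up to ~1/sqrt(n) noise:
temp_a = np.mean(velocities_a**2)
temp_b = np.mean(velocities_b**2)
print(f"pot A: {temp_a:.4f}, pot B: {temp_b:.4f}")

# Any prediction that depends on the pot only through its temperature
# (does it boil? how fast does it heat the room?) cannot tell A from B.
```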

In the LessWrong post the hypothesis is developed in relation to AI and its “alignment”, a concept whose full extent I still do not grasp; it seems mostly theoretical and detached from reality.

“the natural abstraction hypothesis would dramatically simplify AI and AI alignment in particular. It would mean that a wide variety of cognitive architectures will reliably learn approximately-the-same concepts as humans use, and that these concepts can be precisely and unambiguously specified”

“If true, the natural abstraction hypothesis provides a framework for translating between high-level human concepts, low-level physical systems, and high-level concepts used by non-human systems.”

of Broccoli and Chestnuts

a horse chestnut is genetically closer to broccoli than to a sweet chestnut

There is no such thing as a tree, phylogenetically. Just as there is carcinization, sea arthropods tending to become crabs, there is a sort of dendronization: tweak the expression of a couple of genes the right way and a green plant develops wood.

and there’s no such thing as wood either, however evident it is, from the point of view of evolution

apparently there is a classic Scott Alexander post I should read here: THE CATEGORIES WERE MADE FOR MAN, NOT MAN FOR THE CATEGORIES

The incipit does indeed smell of a classic: “The argument goes like this. Jonah got swallowed by a whale. But the Bible says Jonah got swallowed by a big fish. So the Bible seems to think whales are just big fish. Therefore the Bible is fallible. Therefore, the Bible was not written by God.”

Future “legibility”

Charlie Stross, SF writer, on the stickiness of reality and our ability to forecast the future:

“You don’t need a science fiction writer to tell you this stuff: 90% of the world of tomorrow plus ten years is obvious to anyone with a weekly subscription to New Scientist and more imagination than a doorknob.

What’s less obvious is the 10% of the future that isn’t here yet. Of that 10%, you used to be able to guess most of it — 9% of the total — by reading technology road maps in specialist industry publications. We know what airliners Boeing and Airbus are starting development work on, we can plot the long-term price curve for photovoltaic panels, read the road maps Intel and ARM provide for hardware vendors, and so on. (..)

(..) However, this stuff ignores what Donald Rumsfeld named “the unknown unknowns”. About 1% of the world of ten years hence always seems to have sprung fully-formed from the who-ordered-THAT dimension: we always get landed with stuff nobody foresaw or could possibly have anticipated, unless they were spectacularly lucky guessers or had access to amazing hallucinogens. And this 1% fraction of unknown unknowns regularly derails near-future predictions.”

from this speech on, of all things, AI: https://www.antipope.org/charlie/blog-static/2019/12/artificial-intelligence-threat.html

“Legibility” is a “Seeing Like a State” term, James Scott’s terminology, but it has been sticking lately. Scott Alexander on experts, journalism and legibility; in different terms, as Nate Silver put it, the surprising gap between what you read in the news about Covid and what you could gather yourself from preprints and experts’ Twitter threads.

Scott’s point is that experts in public positions and journalists with a duty to report to the public have to strip down what they want to communicate in order to make it “legible” to a wide audience, so information and even the message itself get lost in mainstream media https://astralcodexten.substack.com/p/journalism-and-legible-expertise

This is not what I wanted to write; I got carried away by the legibility concept, which I am probably misappropriating and using outside its intended reach. Anyway, what I really wanted to say: reality is mostly sticky, and the part that changes from one decade to the next moves in ways you can guess with a proper knowledge strategy. 90% stays the same, 9% changes in that foreseeable way, 1% can’t be easily guessed, and that 1% probably changes the meaning of all the rest.

Quantum supremacy take 2 (and house warming)

a new demonstration of #quantumsupremacy via #bosonSampling, i.e. solving a computational problem with a quantum computer in a time impossible for a classical computer

It is not, therefore, a demonstration of a universal, fault-tolerant, scalable or even simply useful computer. But the boson sampling approach could be useful to move in that direction.

The demonstration was made in China; for all the answers, as far as we can understand them, I refer you to Scott Aaronson, who refereed the paper.

Funny story, as told by Scott:

“When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there”
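Just to get a feel for that ~2^n wall, a back-of-the-envelope sketch (the $400,000 figure is from Aaronson’s story; the naive extrapolation is mine):

```python
# Naive scaling arithmetic for the ~2^n classical verification cost.
cost_n30 = 2**30
cost_n40 = 2**40
cost_n50 = 2**50

print(f"n=40 vs n=30: {cost_n40 // cost_n30}x harder")   # 1024x
print(f"n=50 vs n=40: {cost_n50 // cost_n40}x harder")   # 1024x again

# If verifying n=40 burned $400,000, the same naive scaling puts n=50 at:
print(f"n=50 estimate: ${(cost_n50 // cost_n40) * 400_000:,}")  # ~$410M
```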

the conversation continued on Twitter: the authors did not get properly billed, their sponsor footed the bill, also because it all went into residential heating for the nearby houses https://twitter.com/preskill/status/1334900457894891520

of Biochemistry and Geometry

Some Nobel prize was awarded 50 years ago to someone who claimed a protein’s shape could be derived from the atoms building it, and so started a race at guessing how a protein would fold, protein folding in short. It reached the artificial intelligence news this week: DeepMind’s AlphaFold won some protein-folding olympics https://www.nature.com/articles/d41586-020-03348-4

Twitter reminds me how important, and how difficult to deal with, an atom’s position in a molecule is in determining its properties https://twitter.com/curiouswavefn/status/1334562488495423489

While reading Shulgin’s PIHKAL you can find a chapter named “The 4-Position”, which shows how hallucinogenic potency derives from groups taking the 4-position on the benzene ring and staying there while in our body.

Randomized trials and twitter

this is a lot #meta and also a bit #GAC (an Italian acronym, for the unnervingly obvious)

112 papers were randomly chosen to be shared on twitter by a group with ~58k followers or to not be shared. Papers that were tweeted accumulated 4x more citations compared to non-tweeted papers over 1yr.
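Just to make the design concrete, a toy re-creation (every number is invented except the 112 papers and the ~4x headline effect):

```python
import numpy as np

# Toy mock-up of the trial described above; the hypothetical citation
# rates are chosen only to reproduce a ~4x tweeted/non-tweeted ratio.
rng = np.random.default_rng(42)

n_papers = 112
tweeted = rng.permutation(n_papers) < n_papers // 2  # random 50/50 assignment

# Hypothetical 1-year citation counts: baseline ~Poisson(2), tweeted ~4x.
citations = rng.poisson(lam=np.where(tweeted, 8.0, 2.0))

ratio = citations[tweeted].mean() / citations[~tweeted].mean()
print(f"tweeted vs non-tweeted citations after 1 yr: {ratio:.1f}x")
```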

Meta, you know: a randomised trial on papers, themselves surely describing randomised experiments.

GAC because it’s the network, baby: read Barabasi’s Linked and you know that if you look for a job and tell family and friends you get nothing, but if you tell people outside your usual circle you will find one. So tell a 58,000-strong Twitter group.

Barabasi went on to write precisely that: a book to explain the infallible formula of success. The book is titled

The Formula: The Universal Laws of Success

what a fast learning curve looks like

[screenshot, 2020-03-08]

gore edition: new cases and deaths, the growth rates diverging as cases mount and experience is built on how to treat critical patients of a novel disease

But for a “learning curve” framework to work you need to be able to cope with demand: your service should not run into some sort of “diminishing returns” at the margin, or, in the more discrete case, it must remain able to serve customers at all, in order to keep producing the very same outcomes you are measuring gains against.

In other words, ICUs might have overflowed, critical patients might have gone untreated, mortality might have spiked as a consequence, and thereafter kept a steeper profile.
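A minimal toy model of that argument (every parameter is invented): fatality falls along the learning curve while capacity holds, then rebounds once ICU demand overflows.

```python
import numpy as np

# Toy model: case fatality improves with accumulated experience
# ("learning curve") until ICU demand exceeds capacity, after which
# untreated overflow patients drag mortality back up.
days = 60
new_cases = 10 * 1.15 ** np.arange(days)   # exponential epidemic growth
icu_capacity = 500.0                        # hypothetical daily ICU slots

base_cfr, floor_cfr = 0.10, 0.03            # fatality before/after learning
untreated_cfr = 0.30                        # fatality if critical and untreated

cumulative = np.cumsum(new_cases)
learned_cfr = floor_cfr + (base_cfr - floor_cfr) * np.exp(-cumulative / 5000)

critical = 0.05 * new_cases                    # daily ICU demand
treated = np.minimum(critical, icu_capacity)   # capacity constraint
overflow = critical - treated

deaths = treated * learned_cfr + overflow * untreated_cfr
cfr = deaths / critical
for day in (10, 35, 59):
    print(f"day {day}: CFR {cfr[day]:.3f}")   # falls, then spikes at overflow
```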

[screenshot, 2020-03-11]

the disaster medicine protocol was issued by the society of anesthesiologists on the 6th; clearly a situation was building up, and its exponential nature was not lost on the doctors fighting the disease https://www.esanum.it/today/posts/covid-19-le-raccomandazioni-della-siaarti