Quantum simulation

“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.” Feynman, 1981

Waiting for the Quantum Simulation Revolution

Chemistry is quantum computing’s killer app: “quantum computers could aid development of catalysts for clean energy and renewable chemical manufacturing, enable deeper understanding of the enzymes that underlie photosynthesis and the nitrogen cycle, power the discovery of high-temperature superconductors and new materials for solar cells, and much more.”

And then there is Sabine, who goes on saying: “qubits are cheap. it’s not the number of qubits per se that’s the problem. the problem is getting out anything that isn’t just noise” https://twitter.com/skdh/status/1456293385229189122

This is a sponsored article about a UK agency that will help companies develop applications of quantum computing, but really quantum simulation: “One of those near-term applications will be to simulate the behaviour of a simple quantum system, such as a small molecule or the interactions between molecules – which some researchers think could be achieved with around 1000 qubits”

Industry engagement prepares UK for quantum transformation
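The reason a mere ~1000 qubits would already be out of classical reach is that a brute-force classical simulation has to store all 2^n complex amplitudes of the state vector, doubling memory with every added qubit. A minimal NumPy sketch of such a state-vector simulator (the function name is mine, purely illustrative):

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector.

    The state vector holds 2**n amplitudes, so the cost of classical
    simulation doubles with every qubit added.
    """
    state = state.reshape([2] * n)                # one axis per qubit
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)         # restore qubit order
    return state.reshape(-1)

# Example: Hadamard on qubit 0 of |00> gives (|00> + |10>)/sqrt(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
n = 2
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0                                      # start in |00>
psi = apply_single_qubit_gate(psi, H, 0, n)
```

At n = 50 the state vector alone takes 2^50 × 16 bytes, roughly 18 petabytes, which is why even a modest quantum system quickly escapes classical hardware.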

Quantum reads, Oct 27th

Sabine Hossenfelder continues her crusade against Quantum theory: Who is Killing Physics

Arguments about inconsistencies in quantum theory:

“The most remarkable of those recent arguments is perhaps the Frauchiger-Renner paradox, which demonstrates that quantum mechanics cannot consistently describe the use of itself. If you imagine observers observing observers, Daniela Frauchiger and Renato Renner showed that in some cases the observers cannot agree on what happened – if quantum mechanics is correct. It’s simply not fit to be a fundamental theory of nature.

Another milestone has been a no-go theorem for theories that may underlie quantum mechanics. In 2012, Matthew Pusey, Jonathan Barrett, and Terry Rudolph proved that certain completions of quantum mechanics – that is, theories from which quantum mechanics might derive – are impossible. Now, this may sound like a negative result, but no-go theorems are incredibly helpful for theory development because they narrow down possible options.”

At the end of the article I find this one by John Horgan, science book writer: My Quantum Experiment

Science book writer in the sense that he is a writer, a journalist who writes books, not a scientist; but eventually, in lockdown, he decided to study quantum mechanics, really study it, its mathematics, in order to understand the theory. Pretty much what I intended to do myself last year, and now I am a bit stuck; maybe it’s time to start over.

Sabine Hossenfelder on Chiara Marletto

here http://nautil.us/blog/the-end-of-reductionism-could-be-nigh-or-not?utm_source=RSS_Feed&utm_medium=RSS&utm_campaign=RSS_Syndication

Chiara Marletto worked with David Deutsch on Constructor Theory https://en.wikipedia.org/wiki/Constructor_theory: “a new mode of explanation in fundamental physics, first sketched out by David Deutsch, a quantum physicist at the University of Oxford, in 2012. Constructor theory expresses physical laws exclusively in terms of what physical transformations, or tasks, are possible versus which are impossible, and why”

Marletto’s book is indeed titled “The Science of Can and Can’t”. It’s about the role of counterfactuals in fostering the growth of science; it’s about method, and so Sabine is on it. I was actually wondering whether I should read Marletto’s book; now it is on the list. Maybe not.

Sabine’s definition of emergence: “I have, above, used emergent to mean a property of a composite system that can be derived from the laws of the system’s constituents, but which doesn’t make sense on the level of constituents. Conductivity, for example, is a property of materials, but it makes no sense for individual electrons. Temperature is another example. Waves in water, cyclones, the capacity to self-reproduce—these are all emergent properties.” This is the definition of weak emergence; the strong one would imply that the emergent properties cannot be derived from the laws of the components at all: “There is no known example in the real world for strong emergence (which is why physicists normally use the word “emergence” as synonym for “weak emergence”)”

Natural Abstraction Hypothesis

Testing The Natural Abstraction Hypothesis: Project Intro

“Our physical world abstracts well: for most systems, the information relevant “far away” from the system (in various senses) is much lower-dimensional than the system itself.”

Reminds me of Hofstadter’s example with the billiard balls of sorts; what does he call it?

Deutsch rather talks of emergent phenomena: we can usefully do thermodynamic calculations on a pot of boiling water, but if we wanted to be reductionists we would have to do impossible calculations on the dynamics of bubbles, and the result would be the same.

In the lesswrong post the hypothesis is developed in relation to AI and its “alignment”; I still do not grasp the full extent of that concept, which seems mostly theoretical and detached from reality.

“the natural abstraction hypothesis would dramatically simplify AI and AI alignment in particular. It would mean that a wide variety of cognitive architectures will reliably learn approximately-the-same concepts as humans use, and that these concepts can be precisely and unambiguously specified”

“If true, the natural abstraction hypothesis provides a framework for translating between high-level human concepts, low-level physical systems, and high-level concepts used by non-human systems.”

of Broccoli and Chestnuts

A horse chestnut is genetically closer to broccoli than to a sweet chestnut.

There is no such thing as a tree, phylogenetically. Just as there is carcinization, sea arthropods tending to become crabs, so there is dendronization: tweak the expression of a couple of genes the right way and a green plant develops wood.

And there’s no such thing as wood either, however evident it seems, from the point of view of evolution.

Apparently there is a classic Scott Alexander piece I should read here: THE CATEGORIES WERE MADE FOR MAN, NOT MAN FOR THE CATEGORIES

The incipit does indeed smell of a classic: “The argument goes like this. Jonah got swallowed by a whale. But the Bible says Jonah got swallowed by a big fish. So the Bible seems to think whales are just big fish. Therefore the Bible is fallible. Therefore, the Bible was not written by God.”

Future “legibility”

Charlie Stross, SF writer, on the stickiness of reality and our ability to forecast the future:

“You don’t need a science fiction writer to tell you this stuff: 90% of the world of tomorrow plus ten years is obvious to anyone with a weekly subscription to New Scientist and more imagination than a doorknob.

What’s less obvious is the 10% of the future that isn’t here yet. Of that 10%, you used to be able to guess most of it — 9% of the total — by reading technology road maps in specialist industry publications. We know what airliners Boeing and Airbus are starting development work on, we can plot the long-term price curve for photovoltaic panels, read the road maps Intel and ARM provide for hardware vendors, and so on. (..)

(..) However, this stuff ignores what Donald Rumsfeld named “the unknown unknowns”. About 1% of the world of ten years hence always seems to have sprung fully-formed from the who-ordered-THAT dimension: we always get landed with stuff nobody foresaw or could possibly have anticipated, unless they were spectacularly lucky guessers or had access to amazing hallucinogens. And this 1% fraction of unknown unknowns regularly derails near-future predictions.”

from this speech on, of all things, AI https://www.antipope.org/charlie/blog-static/2019/12/artificial-intelligence-threat.html

Legibility is a “seeing like a state” term, James Scott’s terminology, but it has been sticking lately. Scott Alexander on experts, journalism and legibility. In different terms, as Nate Silver put it: the surprising gap between what you read in the news about Covid and what you could gather yourself from preprints and experts’ Twitter threads.

Scott’s point is that experts in public positions and journalists with a duty to report to the public have to strip down what they want to communicate in order to make it “legible” to the wide audience, so information and even the message get lost in mainstream media https://astralcodexten.substack.com/p/journalism-and-legible-expertise

This is not what I wanted to write; I got carried away by the legibility concept, which is probably misappropriated here and used outside its intended reach. Anyway, what I really wanted to say: reality is mostly sticky, and the part that shifts from one decade to the next moves in ways you can guess with a proper knowledge strategy. 90% stays the same, 9% changes in predictable ways, 1% can’t be easily guessed, and that 1% probably changes the meaning of all the rest.

Quantum supremacy take 2 (and house warming)

A new demonstration of #quantumsupremacy via #bosonSampling, i.e. solving with a quantum computer a computational problem in a time impossible for a classical computer.

This is not, therefore, a demonstration of a universal, fault-tolerant, scalable or even simply useful computer. But the Boson Sampling approach could be useful to move in that direction.

The demonstration was made in China; for all the answers, as far as we can understand them, I refer you to Scott Aaronson, who was the referee of the paper.

Funny Story as told by Scott:

“When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there”

The conversation on Twitter: the authors did not get properly billed, their sponsor footed the bill, also because all that heat went into residential heating of the nearby houses https://twitter.com/preskill/status/1334900457894891520
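The ~2^n verification cost is no accident: the amplitudes of a BosonSampling experiment are permanents of n×n matrices, and the best known classical algorithm for the permanent, Ryser’s formula, still sums over all 2^n column subsets. A bare-bones sketch (pure Python, function name mine, for illustration only):

```python
from itertools import combinations

def permanent(a):
    """Matrix permanent via Ryser's formula.

    Sums over all 2**n - 1 non-empty column subsets, versus n! terms
    for the naive expansion -- still exponential, just less so.
    """
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        sign = (-1) ** k
        for cols in combinations(range(n), k):
            prod = 1
            for row in a:                      # product of row sums
                prod *= sum(row[j] for j in cols)
            total += sign * prod
    return (-1) ** n * total

# perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10
```

At n = 40 that is about 10^12 subsets per amplitude, which gives a feel for why verifying n = 40 already costs supercomputer time.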

of Biochemistry and Geometry

Some Nobel prize was awarded 50 years ago to someone who claimed a protein’s shape could be derived from the atoms building it, and so began the race at guessing how a protein would fold: protein folding, in short. It made artificial intelligence news this week, AlphaFold by DeepMind won some protein-folding olympics https://www.nature.com/articles/d41586-020-03348-4

Twitter reminds me how important, and how difficult to deal with, an atom’s position in a molecule is in determining properties https://twitter.com/curiouswavefn/status/1334562488495423489

While reading PIHKAL by Shulgin you can find a chapter named “The 4-Position”, which shows how hallucinogenic potency derives from molecules taking the 4-position on a benzene ring and staying there while in our body.