“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.” Feynman, 1981
Chemistry is quantum computing’s killer app: “quantum computers could aid development of catalysts for clean energy and renewable chemical manufacturing, enable deeper understanding of the enzymes that underlie photosynthesis and the nitrogen cycle, power the discovery of high-temperature superconductors and new materials for solar cells, and much more.”
This is a sponsored article by a UK agency that helps companies develop applications of quantum computing, but it is really about quantum simulation: “One of those near-term applications will be to simulate the behaviour of a simple quantum system, such as a small molecule or the interactions between molecules – which some researchers think could be achieved with around 1000 qubits”
“The most remarkable of those recent arguments is perhaps the Frauchiger-Renner paradox, which demonstrates that quantum mechanics cannot consistently describe the use of itself. If you imagine observers observing observers, Daniela Frauchiger and Renato Renner showed that in some cases the observers cannot agree on what happened – if quantum mechanics is correct. It’s simply not fit to be a fundamental theory of nature.
Another milestone has been a no-go theorem for theories that may underlie quantum mechanics. In 2012, Matthew Pusey, Jonathan Barrett, and Terry Rudolph proved that certain completions of quantum mechanics – that is, theories from which quantum mechanics might derive – are impossible. Now, this may sound like a negative result, but no-go theorems are incredibly helpful for theory development because they narrow down possible options.”
A science book writer in the sense that he is a writer, a journalist writing books, not a scientist; but eventually, in lockdown, he decided to study quantum mechanics, really study it, its mathematics, in order to understand the theory. Pretty much what I intended to do myself last year, and now I am a bit stuck. Maybe it’s time to start over.
Marletto’s book is indeed titled “The Science of Can and Can’t”; it’s about the role of counterfactuals in fostering the growth of science. It’s about method, and so Sabine is on it. I was actually wondering whether I should read Marletto’s book; now it is on the list. Maybe not.
Sabine’s definition of emergence: “I have, above, used emergent to mean a property of a composite system that can be derived from the laws of the system’s constituents, but which doesn’t make sense on the level of constituents. Conductivity, for example, is a property of materials, but it makes no sense for individual electrons. Temperature is another example. Waves in water, cyclones, the capacity to self-reproduce—these are all emergent properties.” This is the weak emergence definition; the strong one would imply that the emergent properties cannot be derived from the laws of the components at all: “There is no known example in the real world for strong emergence (which is why physicists normally use the word “emergence” as synonym for “weak emergence”)”
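A toy sketch of weak emergence (my own illustration, not Sabine’s): temperature is just the mean kinetic energy of a large ensemble of particles, fully derivable from the constituents’ velocities, yet meaningless for any single particle.

```python
import random

# Weak emergence toy: "temperature" of an ideal gas, in units where the
# Boltzmann constant k_B = 1 and particle mass m = 1 (my assumptions).
# Velocity components are drawn from a unit Gaussian, so the expected
# temperature is ~1.0.

def temperature(vels, mass=1.0):
    """Temperature from mean kinetic energy: (3/2) k_B T = <(1/2) m v^2>."""
    mean_ke = sum(0.5 * mass * (vx * vx + vy * vy + vz * vz)
                  for vx, vy, vz in vels) / len(vels)
    return (2.0 / 3.0) * mean_ke

random.seed(0)
vels = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
        for _ in range(50_000)]

T = temperature(vels)
print(T)  # an ensemble-level property; undefined for a single particle
```

The function only returns something meaningful because it averages over many constituents; called on one particle it would just echo that particle’s kinetic energy, which is the point of the “makes no sense at the level of constituents” clause.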
“Our physical world abstracts well: for most systems, the information relevant “far away” from the system (in various senses) is much lower-dimensional than the system itself.”
Reminds me of Hofstadter’s example with the sort-of billiard balls. What does he call it?
Deutsch rather talks of emergent phenomena: we can usefully do thermodynamic calculations on a pot of boiling water, but if we wanted to be reductionists we would have to do impossible calculations on the dynamics of the bubbles, and the result would be the same.
In the LessWrong post the hypothesis is developed in relation to AI and its “alignment”; I still do not grasp the full extent of this concept, which remains mostly theoretical and detached from reality.
“the natural abstraction hypothesis would dramatically simplify AI and AI alignment in particular. It would mean that a wide variety of cognitive architectures will reliably learn approximately-the-same concepts as humans use, and that these concepts can be precisely and unambiguously specified”
“If true, the natural abstraction hypothesis provides a framework for translating between high-level human concepts, low-level physical systems, and high-level concepts used by non-human systems.”
The incipit does indeed smell of classic: “The argument goes like this. Jonah got swallowed by a whale. But the Bible says Jonah got swallowed by a big fish. So the Bible seems to think whales are just big fish. Therefore the Bible is fallible. Therefore, the Bible was not written by God.”
Charlie Stross, SF writer, on reality’s stickiness and our ability to forecast into the future:
“You don’t need a science fiction writer to tell you this stuff: 90% of the world of tomorrow plus ten years is obvious to anyone with a weekly subscription to New Scientist and more imagination than a doorknob.
What’s less obvious is the 10% of the future that isn’t here yet. Of that 10%, you used to be able to guess most of it — 9% of the total — by reading technology road maps in specialist industry publications. We know what airliners Boeing and Airbus are starting development work on, we can plot the long-term price curve for photovoltaic panels, read the road maps Intel and ARM provide for hardware vendors, and so on. (..)
(..) However, this stuff ignores what Donald Rumsfeld named “the unknown unknowns”. About 1% of the world of ten years hence always seems to have sprung fully-formed from the who-ordered-THAT dimension: we always get landed with stuff nobody foresaw or could possibly have anticipated, unless they were spectacularly lucky guessers or had access to amazing hallucinogens. And this 1% fraction of unknown unknowns regularly derails near-future predictions.”
Legibility is a “seeing like a state” term, James Scott’s terminology, but it has stuck lately. Scott Alexander on experts, journalism and legibility. In different terms, as Nate Silver put it, the surprising gap between what you read in the news about Covid and what you could gather yourself from preprints and experts’ Twitter threads.
This is not what I wanted to write; I got carried away by the legibility concept, which is probably misappropriated here and used outside its intended reach. Anyway, what I really wanted to say: reality is mostly sticky, and the part that shifts from one decade to the next moves in ways you can guess with a proper knowledge strategy. 90% stays the same, 9% changes in foreseeable ways, 1% can’t easily be guessed, and that 1% probably changes the meaning of all the rest.
A new demonstration of #quantumsupremacy via #bosonSampling, i.e. solving a computational problem with a quantum computer in a time impossible for a classical computer.
This is not, therefore, a demonstration of a universal, fault-tolerant, scalable or even simply useful computer. But the boson sampling approach could help move in that direction.
The demonstration was made in China; for all the answers, as far as we can understand them, I refer you to Scott Aaronson, who refereed the report.
Funny Story as told by Scott:
“When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there”
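The ~2^n cost Scott mentions comes from the fact that BosonSampling output amplitudes are matrix permanents, and the best known exact classical algorithm, Ryser’s inclusion-exclusion formula, runs in roughly O(2^n · n²) time. A minimal sketch (my own toy implementation, not from the paper):

```python
import itertools

# Ryser's formula: perm(A) = sum over column subsets S of
# (-1)^(n - |S|) * prod_i (sum_{j in S} a_ij).
# The loop over all 2^n subsets is the exponential cost that makes
# classical verification of n-photon BosonSampling infeasible for large n.

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2)."""
    n = len(A)
    total = 0.0
    for subset in itertools.product((0, 1), repeat=n):  # 2^n column subsets
        k = sum(subset)
        if k == 0:
            continue  # empty subset contributes 0
        prod = 1.0
        for row in A:
            prod *= sum(x for x, s in zip(row, subset) if s)
        total += (-1) ** (n - k) * prod
    return total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

At n = 40 this is 2^40 ≈ 10^12 subsets per single permanent, which is roughly why verifying that regime ate $400,000 of supercomputer time.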
Some Nobel prize was awarded about 50 years ago to someone who claimed a protein’s shape could be derived from the atoms building it, and that started a race at guessing how a protein would fold: protein folding, in short. It came to artificial-intelligence news this week: AlphaFold by DeepMind won some protein-folding olympics https://www.nature.com/articles/d41586-020-03348-4
While reading PIHKAL by Shulgin you can find a chapter named “the 4-position”, where he shows how hallucinogenic potency derives from molecules taking the 4-position on a benzene ring and staying there while in our body.
As cosmic rays strike the atmosphere, the collisions create pions that decay into muons that more often spin one way than the other because of a fundamental asymmetry in the laws of nature. These asymmetric muons, raining down, mutate more right-handed biomolecules…