A new demonstration of #quantumsupremacy via #bosonSampling, i.e. a quantum computer solving a computational problem in a time impossible for a classical computer.
It is not, therefore, a question of having demonstrated a universal, fault-tolerant, scalable, or even simply useful computer. But the Boson Sampling approach could help move in that direction.
The demonstration was made in China; for all the answers, as far as we can understand them, I refer you to Scott Aaronson, who was the referee of the paper.
A funny story, as told by Scott:
“When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there.”
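Where does that ~2^n come from? Classically checking an n-photon BosonSampling amplitude reduces to computing the permanent of an n×n matrix, and the standard classical method, Ryser's formula, needs a sum over all 2^n column subsets. A minimal illustrative sketch (my own toy code, not the authors' verification pipeline):

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's formula.

    perm(A) = (-1)^n * sum over subsets S of columns of
              (-1)^{|S|} * prod_i sum_{j in S} a[i][j]
    The outer sum has 2^n terms, hence the exponential cost.
    """
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):          # subset sizes (empty subset contributes 0)
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Sanity check: perm([[1,2],[3,4]]) = 1*4 + 2*3 = 10
print(permanent([[1, 2], [3, 4]]))  # → 10.0
```

At n=40 that is ~10^12 subset terms per amplitude, which is how a verification run can plausibly eat $400,000 of supercomputer time.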
A Nobel prize was awarded some 50 years ago to someone who claimed a protein’s shape could be derived from the atoms building it, which started a race at guessing how a protein would fold: protein folding, in short. It came to artificial-intelligence news this week: AlphaFold, from DeepMind, won a protein-folding olympics https://www.nature.com/articles/d41586-020-03348-4
While reading PIHKAL by Shulgin you can find a chapter named “the 4-position”, where he shows how hallucinogenic potency derives from molecules taking the 4-position in a benzene ring and staying there while in our body.
As cosmic rays strike the atmosphere, the collisions create pions that decay into muons that more often spin one way than the other because of a fundamental asymmetry in the laws of nature. These asymmetric muons, raining down, mutate more right-handed biomolecules…
this is a lot #meta and also a bit #GAC (an Italian acronym, roughly “unnervingly obvious”)
112 papers were randomly chosen to be shared on Twitter by a group with ~58k followers, or not to be shared. Papers that were tweeted accumulated 4x more citations compared to non-tweeted papers over 1 yr.
Meta, you know: a randomised trial of papers, many of which surely describe randomised experiments themselves.
GAC because it’s the network, baby. Read Barabási’s Linked and you know that if you look for a job and tell family and friends you get nothing, but if you tell people outside your usual circle you will find one. So tell a 58,000-strong Twitter group.
Barabási went on to write precisely a book to explain the infallible formula of success; the book is titled, fittingly, The Formula.
Gore edition: new cases and deaths. A divergence in growth as cases grow and experience is built on how to treat critical patients of a novel disease.
But for a “learning curve” framework to work, you need to be able to cope with demand: your service should not run into some sort of “diminishing returns” at the margin, or, in the more discrete case, it must be able to serve all the customers in order to keep producing the very same outcomes you are measuring gains against.
In other words, ICUs might have overflowed, critical patients might have gone untreated, mortality might have spiked as a consequence, and thereafter kept a steeper profile.
“without models, there are no data. I’m not talking about the difference between “raw” and “cooked” data. I mean this literally. Today, no collection of signals or observations—even from satellites, which can “see” the whole planet—becomes global in time and space without first passing through a series of data models.”
Paul N. Edwards, “A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming”
“A reanalysis project involves reprocessing observational data spanning an extended historical period using a consistent modern analysis system, to produce a dataset that can be used for meteorological and climatological studies.” https://en.wikipedia.org/wiki/Atmospheric_reanalysis
“You have probably heard the common deniers’ complaint that climate scientists adapt models when new data comes in. That is supposedly unscientific because, here it comes (..) But the deniers’ argument merely demonstrates they know even less about scientific methodology than particle physicists. Revising a hypothesis when new data comes in is perfectly fine. In fact, it is what you expect good scientists to do.”
Pellagra ravaged some areas of Europe where corn was the staple, long after it was imported from the Americas. The Europeans did not import the right method of preparing it, which would have enriched it with niacin (vitamin B3), and did not develop a remedy for centuries. Maybe water rats would have done better.
Well, not quite: the article is the story of the hot debate around Hawking’s shuttlecock model.
“In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.
Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point.”
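Schematically (my notation, not the article's), the two constructions share the same form: a wave function as a sum over histories weighted by the action.

```latex
% Feynman: wave function as a weighted sum over all particle paths x(t)
\psi \;=\; \int \mathcal{D}[x(t)]\; e^{\,i S[x(t)]/\hbar}

% Hartle--Hawking: wave function of the universe as a sum over all
% 4-geometries g that close off smoothly at a point ("no boundary")
\Psi \;=\; \int_{\text{no boundary}} \mathcal{D}[g]\; e^{\,i S[g]/\hbar}
```

In both cases "convoluted" histories are suppressed by the oscillating weight, which is what the article's "weighting straightforward paths more" refers to.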
We’re not building a machine that calculates answers, he says; instead, we’re discovering questions. Nature’s shape-shifting laws seem to be the answer to an unknown mathematical question. This is why Arkani-Hamed and his colleagues find their studies of the amplituhedron so promising. Calculating the volume of the amplituhedron is a question in geometry, one that mathematicians might have pondered had they discovered the object first. Somehow, the answer to the question of the amplituhedron’s volume describes the behavior of particles, and that answer, in turn, can be rewritten in terms of space and time.