“Our physical world abstracts well: for most systems, the information relevant “far away” from the system (in various senses) is much lower-dimensional than the system itself.”
Reminds me of Hofstadter's example with the billiard balls; what does he call it?
Deutsch rather talks of emergent phenomena: we can usefully do thermodynamic calculations on a pot of boiling water, but if we wanted to be reductionists we would have to do impossible calculations on the dynamics of the bubbles, and the result would be the same.
In the LessWrong post the hypothesis is developed in relation to AI and its “alignment”, a concept whose full extent I still do not grasp: mostly theoretical and detached from reality.
“the natural abstraction hypothesis would dramatically simplify AI and AI alignment in particular. It would mean that a wide variety of cognitive architectures will reliably learn approximately-the-same concepts as humans use, and that these concepts can be precisely and unambiguously specified”
“If true, the natural abstraction hypothesis provides a framework for translating between high-level human concepts, low-level physical systems, and high-level concepts used by non-human systems.”
The incipit does indeed smell of the classic “The argument goes like this. Jonah got swallowed by a whale. But the Bible says Jonah got swallowed by a big fish. So the Bible seems to think whales are just big fish. Therefore the Bible is fallible. Therefore, the Bible was not written by God.”
Charlie Stross, SF writer, on reality's stickiness and the ability to forecast into the future:
“You don’t need a science fiction writer to tell you this stuff: 90% of the world of tomorrow plus ten years is obvious to anyone with a weekly subscription to New Scientist and more imagination than a doorknob.
What’s less obvious is the 10% of the future that isn’t here yet. Of that 10%, you used to be able to guess most of it — 9% of the total — by reading technology road maps in specialist industry publications. We know what airliners Boeing and Airbus are starting development work on, we can plot the long-term price curve for photovoltaic panels, read the road maps Intel and ARM provide for hardware vendors, and so on. (..)
(..) However, this stuff ignores what Donald Rumsfeld named “the unknown unknowns”. About 1% of the world of ten years hence always seems to have sprung fully-formed from the who-ordered-THAT dimension: we always get landed with stuff nobody foresaw or could possibly have anticipated, unless they were spectacularly lucky guessers or had access to amazing hallucinogens. And this 1% fraction of unknown unknowns regularly derails near-future predictions.”
Legibility is a “seeing like a state” term, James Scott's terminology, but it has stuck lately. Scott Alexander on experts, journalism and legibility. In different terms, as Nate Silver put it, the surprising gap between what you read in the news about Covid and what you could gather yourself from preprints and experts' Twitter threads.
This is not what I wanted to write; I got carried away by the legibility concept, which is probably misappropriated and used outside its intended reach. Anyway, what I really wanted to say is: reality is mostly sticky, and the part that changes from a decade to the next moves in ways you can guess with a proper knowledge strategy. 90% stays the same, 9% changes in guessable ways, 1% can't be easily guessed, and that 1% probably changes the meaning of all the rest.
A new demonstration of #quantumsupremacy via #bosonSampling, i.e. solving a computational problem on a quantum computer in a time impossible for a classical computer.
This is not, therefore, a demonstration of a universal, fault-tolerant, scalable, or even simply useful computer. But the BosonSampling approach could be useful for moving in that direction.
The demonstration was made in China; for all the answers, as far as we can understand them, I refer you to Scott Aaronson, who refereed the report.
A funny story as told by Scott:
“When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there.”
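The anecdote above is really just an exponential back-of-envelope. A minimal sketch, assuming the ~2^n classical verification scaling from the quote and anchoring on the reported $400,000 for n=40 (the function name and extrapolation are mine, purely illustrative):

```python
# Toy extrapolation: classical verification of n-photon BosonSampling
# takes ~2**n time for any known algorithm (per Aaronson's quote),
# so cost grows by a factor of 1024 for every 10 extra photons.

def verification_cost_usd(n: int, anchor_n: int = 40, anchor_cost: float = 400_000) -> float:
    """Naive 2**n cost extrapolation anchored at the reported n=40 run."""
    return anchor_cost * 2 ** (n - anchor_n)

print(f"n=30: ~${verification_cost_usd(30):,.0f}")  # roughly $391: trivial
print(f"n=40: ~${verification_cost_usd(40):,.0f}")  # the $400,000 actually spent
print(f"n=50: ~${verification_cost_usd(50):,.0f}")  # ~$410 million: why they stopped
```

Ten more photons turns a supercomputer bill into a research budget, which is the whole point of the demonstration.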
Some Nobel prize was awarded 50 years ago to someone who claimed a protein's shape could be derived from the atoms composing it, which started a race at guessing how a protein would fold: protein folding, in short. It made artificial-intelligence news this week: AlphaFold, by DeepMind, won some protein-folding olympics https://www.nature.com/articles/d41586-020-03348-4
While reading PIHKAL by Shulgin you can find a chapter named “The 4-Position”, where he shows how hallucinogenic potency derives from molecules taking the 4-position on a benzene ring and staying there while in our body.
As cosmic rays strike the atmosphere, the collisions create pions that decay into muons that more often spin one way than the other because of a fundamental asymmetry in the laws of nature. These asymmetric muons, raining down, mutate more right-handed biomolecules…
This is a lot #meta and also a bit #GAC (Italian, unnervingly obvious).
112 papers were randomly chosen to be shared on twitter by a group with ~58k followers or to not be shared. Papers that were tweeted accumulated 4x more citations compared to non-tweeted papers over 1yr.
Meta, you know: a randomised trial of papers, which surely themselves describe randomised experiments.
GAC because it's the network, baby: read Barabási's Linked and you know that if you look for a job and tell only family and friends you get nothing, but if you tell people outside your usual circle you will find one. So tell a 58,000-strong Twitter group.
Barabási went on to write precisely a book explaining the infallible formula of success; the book is titled The Formula.
Gore edition: new cases and deaths, and a divergence in their growth as cases grow and experience is built on how to treat critical patients of a novel disease.
But for a “learning curve” framework to work, you need to be able to cope with demand: your service should not run into some sort of “diminishing returns” at the margin, or, in the more discrete case, it must be able to serve all the patients in order to produce the very outcomes you are measuring gains against.
In other words, ICUs might have overflowed, critical patients might have gone untreated, mortality might have spiked as a consequence, and thereafter kept a steeper profile.
“without models, there are no data. I’m not talking about the difference between “raw” and “cooked” data. I mean this literally. Today, no collection of signals or observations—even from satellites, which can “see” the whole planet—becomes global in time and space without first passing through a series of data models.”
“Paul N. Edwards – A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming”
“A reanalysis project involves reprocessing observational data spanning an extended historical period using a consistent modern analysis system, to produce a dataset that can be used for meteorological and climatological studies.” https://en.wikipedia.org/wiki/Atmospheric_reanalysis
“You have probably heard the common deniers’ complaint that climate scientists adapt models when new data comes in. That is supposedly unscientific because, here it comes (..) But the deniers’ argument merely demonstrates they know even less about scientific methodology than particle physicists. Revising a hypothesis when new data comes in is perfectly fine. In fact, it is what you expect good scientists to do.”
Pellagra ravaged some areas of Europe where corn was the staple, long after it was imported from the Americas. The Europeans did not import the right method of preparing it, which would have made its niacin (vitamin B3) available, and did not develop a remedy for centuries. Maybe water rats would have done better.