“Our physical world abstracts well: for most systems, the information relevant “far away” from the system (in various senses) is much lower-dimensional than the system itself.”
Reminds me of Hofstadter's example with the billiard balls; what does he call it?
Deutsch, rather, talks of emergent phenomena: we can usefully do thermodynamic calculations on a pot of boiling water, but if we wanted to be strict reductionists we would have to do intractable calculations on the dynamics of every single bubble, and the result would be the same.
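A toy sketch of the same point (my own illustration, not from the post): far away from a cluster of point masses, the Newtonian field depends, to leading order, only on two summary quantities, the total mass and the center of mass, not on the microscopic arrangement. Two completely different random configurations sharing those summaries produce nearly identical far fields.

```python
import random

def field_at(x, masses):
    # Newtonian field magnitude at axial position x, with G = 1.
    # masses is a list of (position, mass) pairs near the origin.
    return sum(m / (x - p) ** 2 for p, m in masses)

def random_cluster(n=50, total=100.0):
    # n point masses scattered in [-1, 1], rescaled to a fixed total
    # mass and shifted so the center of mass sits at 0.
    ms = [random.random() for _ in range(n)]
    scale = total / sum(ms)
    ms = [m * scale for m in ms]
    ps = [random.uniform(-1, 1) for _ in range(n)]
    com = sum(m * p for m, p in zip(ms, ps)) / total
    return [(p - com, m) for p, m in zip(ps, ms)]

random.seed(0)
a, b = random_cluster(), random_cluster()  # different microscopic detail
fa, fb = field_at(1000.0, a), field_at(1000.0, b)
print(fa, fb)  # nearly identical: only the low-dimensional summary survives
```

The residual difference between the two fields is of the order of the quadrupole correction, roughly (cluster size / distance)², so it shrinks rapidly as the observer moves away: exactly the "information relevant far away is low-dimensional" claim.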
In the LessWrong post the hypothesis is developed in relation to AI and its “alignment”, a concept whose full extent I still do not grasp; it remains mostly theoretical and detached from reality.
“the natural abstraction hypothesis would dramatically simplify AI and AI alignment in particular. It would mean that a wide variety of cognitive architectures will reliably learn approximately-the-same concepts as humans use, and that these concepts can be precisely and unambiguously specified”
“If true, the natural abstraction hypothesis provides a framework for translating between high-level human concepts, low-level physical systems, and high-level concepts used by non-human systems.”