The Bitter Lesson of Rich Sutton says that progress over 70 years of AI has come from general methods relying on computational power, which has increased thanks to Moore's Law. Methods built on specific knowledge of the field have failed, while "brute force" general methods have won out.
The Better Lesson of Rodney Brooks retorts that Moore's Law might now yield a doubling only every 20 years, that Dennard scaling (constant power density) broke down around 2006, so there is not much computing power to waste, and that research is very active on network architectures and custom-designed chips.
In a way both are right; they are speaking at different levels of generalization. Sutton:
“In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.”
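Sutton's contrast can be made concrete with a toy sketch. The hand-crafted approach encodes human knowledge directly in the filter weights (here a Sobel kernel for vertical edges), while a convolutional network keeps only the convolution operation itself and lets the data determine the weights. The `conv2d` helper and the 8×8 test image below are illustrative, not taken from either essay:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation with a small kernel (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hand-crafted knowledge: the Sobel kernel encodes, by design,
# what a vertical edge looks like.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# A deep network instead treats these 9 weights as free parameters
# learned from data; the architecture contributes only the
# convolution operation and its translation invariance.
learned = np.random.randn(3, 3)  # would be fitted by gradient descent

image = np.zeros((8, 8))
image[:, 4:] = 1.0  # a vertical edge down the middle

edges = conv2d(image, sobel_x)
print(edges.shape)  # (6, 6)
print(edges.max())  # strongest response sits on the edge: 4.0
```

The point of the sketch is that both approaches apply the same operation; they differ only in whether the kernel values come from human insight or from computation over data.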