Recently, between a daughter burbling about the chemical transferability of learning and a character (in my new novel) pushing the boundaries of artificial intelligence, I’ve been thinking a lot about learning.
As a layperson, I’ve thought of learning mainly in two modes. The first is young learning: the generational transfer of social convention and disciplinary knowledge, which, in environments that adequately balance boundaries with freedom and nourishment with stress, develops the capacity for higher-order connections, analogy, abstraction, and “extra-order” creativity. The second is adult learning, which uses and further develops those higher-order capacities. All of this presumes that learning is a humanistic and social process. Our physiology allows this learning, of course, but until recently I didn’t seriously consider the possibility that learning might be, fundamentally, a physiological or logical process. Today, however – with explorations of the chemical transferability of learning, increasing understanding of the structure and activity of the brain, and the development of sophisticated algorithms for pattern recognition and unsupervised machine learning – I, along with many others, am fascinated by, and curious about, the chemical and mechanical nature of learning.
Snap! Palpably, truly, my brain shut a mental trap around the word “curiosity.” Aha. Where would curiosity figure in the chemical transfer of learning? Can artificially intelligent curiosity match human curiosity? What is the relationship of curiosity to learning and intelligence? So far, explorations of the chemical transfer of learning are restricted to conditioned learning, so the question of curiosity is very far from arising. But I think we can ask about curiosity in machine learning.
Curiosity in mammals includes both instrumental curiosity – motivated, problem-solving – and (pleasurably) idle curiosity. Both kinds can lead to learning. The first sounds amenable to algorithmic machine learning. The second, pleasurably idle curiosity, sounds fundamentally inconsistent with algorithmic processes, but one could certainly code a machine to simulate idle curiosity. One could have an “idle” curiosity algorithm, with a cosmetic repertoire of pleasure indicators, linked to a mechanical random number generator, and one could code for recording and learning from the effects of the random, if not truly idle, curiosity. I think it could look pretty convincing, and the machine might even derive quantitatively and qualitatively better learning than I do from my idle curiosity.
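To make the idea concrete, here is a minimal sketch of what such simulated idle curiosity might look like in Python. Everything in it is hypothetical and invented for illustration – the class name, the notion of topics and payoffs, the weight update – but it has the three ingredients described above: a random number generator standing in for idleness, a cosmetic pleasure indicator, and a record of wanderings that the machine learns from.

```python
import random

class IdleCuriosityAgent:
    """A toy sketch of simulated 'idle' curiosity. All names here are
    hypothetical: the agent has no goal, wanders among topics at random,
    emits a cosmetic pleasure signal, and updates simple interest
    weights from whatever it finds."""

    def __init__(self, topics, seed=None):
        # The 'mechanical random number generator' supplying the idleness.
        self.rng = random.Random(seed)
        self.topics = list(topics)
        # Learned interest per topic, nudged by experience.
        self.weights = {t: 1.0 for t in self.topics}
        # Record of wanderings, so the random curiosity is not wasted.
        self.log = []

    def wander(self, environment):
        """One bout of 'idle' curiosity: pick a topic uniformly at random
        (not by weight -- that is the idle part), observe a payoff,
        signal pleasure cosmetically, and learn from the outcome."""
        topic = self.rng.choice(self.topics)
        payoff = environment(topic)       # whatever the world returns
        self.log.append((topic, payoff))
        self.weights[topic] += payoff     # learning from the random wander
        if payoff > 0:
            print(f"ooh, {topic}!")       # cosmetic pleasure indicator
        return topic, payoff

# Usage: a made-up environment in which some topics reward curiosity
# more than others.
rewards = {"tidepools": 2.0, "etymology": 1.0, "tax law": -1.0}
agent = IdleCuriosityAgent(rewards.keys(), seed=7)
for _ in range(20):
    agent.wander(lambda t: rewards[t])
```

The design choice worth noticing is that the *selection* stays random while the *weights* still learn: the machine ends up with a record of what its idle wandering was worth, which is exactly the sense in which simulated idle curiosity could yield real learning even if the idleness itself is only cosmetic.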
So where would that leave me? At the limit, the machine cannot have idle curiosity. That does leave me (and you) with a fertile question: what then can the machine never have in terms of learning? (I think one could have a strongly analogous line of inquiry about self-consciousness. Perhaps in a future post. Or in a guest post?)
Note: In this note, I use “learning” and “intelligence” to denote mainly mental activity (descriptive, analytical, creative, etc.). I am not looking in any primary way at “muscle memory,” emotional intelligence, etc., though, clearly, all of these greatly affect any person’s overall capacity and process of learning and “intelligence.” I also do not look at a crucial form of curiosity among mammals in general, but, most strikingly, among humans – relational curiosity.
Related content: a fascinating article on gender and AI/Robots.