source: http://rebooting.ai

“Rebooting AI” notes-6

Alisher Abdulkhaev

--

(a series of posts for each chapter)

<<< Previous post (“Rebooting AI” notes-5)

In this post I would like to list some quotes from the book that I found either important or interesting. (Note: some of the quotes are paraphrased by me, and some are my own insights.)

6. Insights from the Human Mind

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. (MARVIN MINSKY, THE SOCIETY OF MIND)

“There is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion,” said Chaz Firestone, a Yale cognitive scientist. No one equation is ever going to capture the diversity of what human minds manage to do.

Computers don’t have to work in the same ways as people. There is no need for them to make the many cognitive errors that impair human thought, such as confirmation bias (ignoring data that runs against your prior theories), or to mirror the many limitations of the human mind, such as the difficulty human beings have in memorizing a list of more than about seven items… Still, there is much to be learned from how the human mind works.

Here are eleven clues, drawn from the cognitive sciences (psychology, linguistics, and philosophy), that the authors offer as critical:

There are no silver bullets: Deep learning is largely falling into the same trap, lending fresh mathematics to a perspective on the world that is still largely about optimizing reward, without thinking about what else needs to go into a system to achieve what we have been calling deep understanding. But if the study of neuroscience has taught us anything, it’s that the brain is enormously complex, often described as the most complex system in the known universe. The average human brain has roughly 86 billion neurons, of hundreds if not thousands of different types; trillions of synapses; and hundreds of distinct proteins within each individual synapse. There are also more than 150 distinctly identifiable brain areas, and a vast and intricate web of connections between them. Truly intelligent and flexible systems are likely to be full of complexity, much like brains.

Cognition makes extensive use of internal representations: a book review written in 1959 by Noam Chomsky, of B. F. Skinner’s Verbal Behavior, effectively killed behaviorism. Chomsky’s critique revolved around the question of whether human language could be understood strictly in terms of the history of what happened in the external environment surrounding the individual (what people said, and what sort of reactions they received), or whether it was important to understand the internal mental structure of the individual. In his conclusion, Chomsky heavily emphasized the idea that “we recognize a new item as a sentence not because it matches some familiar item in any simple way, but because it is generated by the grammar that each individual has somehow and in some form internalized.” Only by understanding this internal grammar would we have any hope of grasping how a child learns language. In behaviorism’s place, a new field emerged, called cognitive psychology. Where behaviorism tried to explain behavior entirely on the basis of external reward history, cognitive psychology focused largely on internal representations, like beliefs, desires, and goals. Brown University machine learning expert Stuart Geman said that “the fundamental challenges in neural modeling are about representations rather than learning per se.”

Abstraction and generalization play an essential role in cognition: much of what we know is fairly abstract. For instance, the relation “X is the sister of Y” holds between many different pairs of people; we don’t just know that a particular pair of people are sisters, we know what sisters are in general, and can apply that knowledge to individuals. The representations that underlie both cognitive models and common sense are all built on a foundation of a rich collection of such abstract relations, combined in complex structures.
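
To make the contrast concrete, here is a minimal Python sketch (my own illustration, not from the book, using hypothetical names like sister_of): a particular fact concerns two named individuals, while an abstract relation is defined over variables, so general rules can be stated once against the relation rather than separately for every pair of people.

```python
# Minimal sketch (not from the book): particular facts vs. an abstract relation.

# Particular facts: ground assertions about named individuals.
sister_facts = {("alice", "carol"), ("dana", "eve")}  # (x, y): x is y's sister

def sister_of(x, y):
    """Abstract relation 'X is the sister of Y', defined over variables."""
    return (x, y) in sister_facts

def shares_a_parent(x, y):
    # Toy general rule, stated once over the relation: sisters share a parent.
    return sister_of(x, y)

print(sister_of("alice", "carol"))     # True: a stored particular fact
print(shares_a_parent("dana", "eve"))  # True: the general rule applied to another pair
```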

Cognitive systems are highly structured: in the bestselling book “Thinking, Fast and Slow,” Nobel laureate Daniel Kahneman divides human cognitive processes into two categories, System 1 and System 2. System 1 (fast) processes are carried out quickly, often automatically; the human mind just does them, and you don’t have any sense of how you are doing it. System 2 (slow) processes require conscious, step-by-step thought; when System 2 is engaged, you have an awareness of thinking, as in working out a puzzle or solving a math problem. Neuroscience paints an even more complex picture, in which hundreds of different areas of the brain, each with its own distinct function, coalesce in differing patterns to perform any one computation. Everything that we do requires a different subset of our brain’s resources, and in any given moment some brain areas will be idle while others are active. The occipital cortex tends to be active for vision, the cerebellum for motor coordination, and so forth. The brain is a highly structured device, and a large part of our mental prowess comes from using the right neural tools at the right time. We can expect that true AI will likewise be highly structured, with much of its power coming from the capacity to leverage that structure in the right ways at the right time for a given cognitive challenge.
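
As a rough illustration of “the right tool at the right time” (my own toy sketch, not an architecture proposed by the book), the snippet below routes a query to a fast, automatic System 1-like path when an answer is already available, and falls back to a slow, step-by-step System 2-like path otherwise.

```python
# My own toy illustration (not the book's proposal): a hybrid system that,
# like the System 1 / System 2 division, answers from a fast automatic path
# when possible and a slow deliberate path otherwise.

memo = {"2+2": 4, "7*8": 56}  # "System 1": instant, automatic retrieval

def slow_solve(expr):
    """'System 2': explicit step-by-step evaluation (here, a toy parser)."""
    left, op, right = expr.partition("+" if "+" in expr else "*")
    a, b = int(left), int(right)
    return a + b if op == "+" else a * b

def answer(expr):
    if expr in memo:           # fast path: no "awareness" of the steps
        return memo[expr]
    result = slow_solve(expr)  # slow path: conscious, effortful work
    memo[expr] = result        # practice turns System 2 into System 1
    return result

print(answer("7*8"))    # retrieved instantly
print(answer("13*17"))  # computed step by step, then cached
```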

Even apparently simple aspects of cognition sometimes require multiple tools.

  • Even at a fine-grained scale, cognitive machinery often turns out to be composed not of a single mechanism, but many.
  • Deep Learning is much more likely to be a necessary component of intelligence than to be sufficient for intelligence.
  • A key challenge for AI is to find a comparable balance between mechanisms that capture abstract truths and mechanisms that deal with the gritty world of exceptions.
  • Getting to broad intelligence will require us to bring together many different tools in ways we have yet to discover.

Human thought and language are compositional

  • With a finite brain and finite amount of linguistic data, we manage to create a grammar that allows us to say and understand an infinite range of sentences by constructing larger sentences out of smaller components.
  • People in machine learning have tried to encode words as vectors, with the notion that any two words similar in meaning ought to be encoded as similar vectors (vectors with a small distance between them).
  • A technique called word2vec, devised by Ilya Sutskever and Tomas Mikolov, allowed computers to efficiently and quickly come up with word vectors of this sort. Word2vec seems to work for verbal analogies, as in the toy sketch below.
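
Here is a self-contained toy version of that analogy arithmetic (hand-made three-dimensional vectors rather than real word2vec embeddings, which are learned from text and have hundreds of dimensions): the classic example is that king − man + woman lands nearest to queen.

```python
import numpy as np

# Toy illustration, not real word2vec: hand-made vectors whose dimensions
# loosely encode (royalty, maleness, person-ness).
vectors = {
    "king":   np.array([0.9, 0.9, 1.0]),
    "queen":  np.array([0.9, 0.1, 1.0]),
    "man":    np.array([0.1, 0.9, 1.0]),
    "woman":  np.array([0.1, 0.1, 1.0]),
    "prince": np.array([0.8, 0.9, 1.0]),
    "apple":  np.array([0.0, 0.4, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' by vector arithmetic: b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

# man : king :: woman : ?
print(analogy("man", "king", "woman"))  # -> "queen"
```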

A robust understanding of the world requires both top-down and bottom-up information: cognitive psychologists often distinguish between two kinds of knowledge: bottom-up information, which comes directly from our senses, and top-down knowledge, which is our prior knowledge about the world.

  • In image recognition, without context, the pixels on their own make little sense. For instance, an object in an image may be recognized correctly when there is some context around it, yet recognition can fail when the same object appears in isolation, with no contextual information around it (see the sketch after this list).
  • Language tends to be underspecified, which means we don’t say everything we mean; we leave most of it to context, because it would take forever to spell everything out.
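
One common way to formalize this combination (my own hedged sketch, not the book’s formulation) is Bayesian: bottom-up evidence from the pixels supplies a likelihood, top-down knowledge about the scene supplies a prior, and recognition multiplies the two, so the same ambiguous evidence yields different interpretations in different contexts.

```python
import numpy as np

# Hedged toy sketch (mine, not the book's): combining bottom-up evidence
# (a likelihood from the pixels) with top-down knowledge (a prior from
# the scene context) via Bayes' rule.

labels = ["phone", "remote", "chocolate bar"]

# Bottom-up: the isolated object is visually ambiguous, so the pixel
# likelihood barely distinguishes the three candidates.
likelihood = np.array([0.35, 0.34, 0.31])

def posterior(prior):
    unnorm = likelihood * prior
    return unnorm / unnorm.sum()

# Top-down: different scene contexts assign different priors.
office_prior = np.array([0.6, 0.3, 0.1])  # desks tend to hold phones
sofa_prior   = np.array([0.2, 0.7, 0.1])  # sofas tend to hold remotes

for name, prior in [("office", office_prior), ("sofa", sofa_prior)]:
    p = posterior(prior)
    print(name, labels[int(np.argmax(p))], np.round(p, 2))
# Same pixels, different context, different recognition.
```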

Causal relations are a fundamental aspect of understanding the world: a rich understanding of causality is a ubiquitous and indispensable aspect of human cognition.
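
The book gives no code here, but a toy sketch of my own may clarify why causal knowledge differs from mere correlation: in a simulated world where low pressure drives both the barometer reading and rain, the two are correlated, yet intervening on the barometer does not change the rain.

```python
import random

# Toy sketch (mine): correlation vs. causation in a tiny simulated world
# where atmospheric pressure causes both the barometer reading and rain.

def world(set_barometer=None):
    pressure = random.random()                 # hidden common cause
    barometer = pressure if set_barometer is None else set_barometer
    rain = pressure < 0.3                      # low pressure causes rain
    return barometer, rain

random.seed(0)
samples = [world() for _ in range(10_000)]
low = [r for b, r in samples if b < 0.3]
print(f"P(rain | barometer low)        = {sum(low) / len(low):.2f}")  # ~1.00: strong correlation

forced = [world(set_barometer=0.1) for _ in range(10_000)]
print(f"P(rain | do(barometer := low)) = {sum(r for _, r in forced) / len(forced):.2f}")  # ~0.30: unchanged
```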

We keep track of individual people and things: deep learning is focused around categories, not around individuals. A deep learning system can track a person in a video with some accuracy, but it doesn’t have any deeper sense of the individual behind the detection. We, however, bring knowledge about the specific individuals we see in a video (recognizing Derek Jeter as an athlete, for example), as sketched below.
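
A rough sketch of the missing capability (my own illustration, with hypothetical structures): a category-level tracker only emits labels like “person,” while an individual-level record accumulates facts about a specific identity across observations and can reuse them later.

```python
# My own rough sketch: category-level perception ("there is a person in
# frame 3") vs. individual-level records that accumulate knowledge about
# a specific identity over time.

from collections import defaultdict

category_detections = []                   # what a typical tracker produces
individual_knowledge = defaultdict(dict)   # what humans also maintain

def observe(frame, category, identity=None, **facts):
    category_detections.append((frame, category))
    if identity is not None:
        individual_knowledge[identity].update(facts)

observe(1, "person", identity="Derek Jeter", profession="athlete")
observe(2, "person", identity="Derek Jeter", team="Yankees")
observe(3, "person")  # unknown passer-by: category only

print(category_detections)                  # [(1, 'person'), (2, 'person'), (3, 'person')]
print(individual_knowledge["Derek Jeter"])  # {'profession': 'athlete', 'team': 'Yankees'}
```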

Complex cognitive creatures aren’t blank slates: the evidence from biology (developmental psychology and developmental neuroscience) is overwhelming: nature and nurture work together, not in opposition. Most people in machine learning emphasize learning but fail to consider innate knowledge. Yet nature and nurture don’t really compete; the richer your starting point, the more you can learn. The real advance in AI, we believe, will start with an understanding of what kinds of knowledge and representations should be built in prior to learning.

--

Alisher Abdulkhaev

Machine Learning Engineer @ Browzzin & board member of Machine Learning Tokyo: https://medium.com/@mltai