source: http://rebooting.ai

“Rebooting AI” notes-4

Alisher Abdulkhaev

--

(a series of posts for each chapter)

<<< Previous post (“Rebooting AI” notes-3)

>>> Next post (“Rebooting AI” notes-5)

In this post I would like to list some quotes from the book that I found either important or interesting. (Note: some of the quotes are paraphrased by me, and some passages are my own insights.)

4. If Computers Are So Smart, How Come They Can’t Read?

In medicine, roughly 7,000 academic papers are published every day. No doctor or researcher can possibly read them all. Drug discovery gets delayed in part because a great deal of information is locked up in literature that nobody has time to read. New treatments sometimes don’t get applied because doctors don’t have time to discover them. AI programs that could automatically synthesize the vast medical literature would be a true revolution. (Recently, a paper in Nature Biotechnology proposed a drug-discovery approach aided by deep learning algorithms: https://www.nature.com/articles/s41587-019-0224-x)

Someday, when machines really can read, understand, and reason well, our descendants will wonder how we ever got by without synthetic/artificial readers, just as we wonder how earlier generations managed to live without electricity. However, even though current AI systems can answer questions about a particular text, they cannot give a reasonable answer when the answer is not spelled out directly in a phrase in the indexed text. That would require the ability to reason, which AI systems have not yet achieved.
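To make this limitation concrete, here is a toy sketch (my own illustration, not any real system) of extraction-style question answering: the program can only return an answer when it is lexically spelled out in the indexed text, and fails the moment an inference step is needed.

```python
# Toy sketch of extraction-style QA: answers succeed only when they are
# spelled out verbatim in the indexed text. Real systems are far more
# sophisticated, but the limitation the book describes is analogous.
from typing import Optional

TEXT = "The patient reported chest pain while mowing the yard last week."

def keyword_answer(question: str, text: str) -> Optional[str]:
    """Return a sentence sharing at least one word with the question."""
    q_words = set(question.lower().split())
    for sentence in text.split("."):
        if q_words & set(sentence.lower().split()):
            return sentence.strip()
    return None

# Directly spelled out ("reported", "chest" appear in the text): match found.
print(keyword_answer("Who reported chest pain?", TEXT))

# Requires inference -- "exertion" never appears in the text: no match.
print(keyword_answer("Did exertion cause discomfort?", TEXT))
```

The second query fails even though any human reader knows mowing the yard is exertion; bridging that gap is exactly the reasoning step the book says current systems lack.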

Consider this simple dialogue between a doctor and a patient:

— DOCTOR: Do you get chest pain with any sort of exertion?

— PATIENT: Well I was cutting the yard last week and I felt like an elephant was sitting on me [Pointing to chest].

Any person easily grasps the situation: the answer to the doctor’s question is actually “yes.” Cutting the yard falls into the category of exertion. An elephant is heavy, so we conclude that the patient was in considerable pain. We also automatically infer that the word “felt” is being used figuratively rather than literally, given the amount of damage an actual elephant would inflict.

Just think how much prior knowledge we leverage to understand this “relatively simple” dialogue. To a machine, however, “fully” understanding the conversation is far from trivial.

In the language of cognitive psychology, what you do when you read any text is to build up a cognitive model of the meaning of what the text is saying. This can be as simple as compiling what Daniel Kahneman and the late Anne Treisman called an object file — a record of an individual object and its properties — or as complex as a complete understanding of a complicated scenario.
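As a rough sketch of what such a record might look like, here is a minimal “object file” in code, a record of one individual object and its accumulating properties. The class name and fields are my own illustration, not anything specified in the book.

```python
# Minimal sketch of an "object file" (Kahneman & Treisman): a record of an
# individual object and its properties, updated as the reader accumulates
# information from the text. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class ObjectFile:
    identity: str                              # what the object is
    properties: dict = field(default_factory=dict)

    def update(self, key: str, value: str) -> None:
        """Revise the record as new information arrives in the text."""
        self.properties[key] = value

# Reading the doctor-patient dialogue, a reader might build up:
patient = ObjectFile("patient")
patient.update("activity_last_week", "cutting the yard")
patient.update("symptom", "chest pain on exertion")  # inferred, not stated
print(patient)
```

The point of the sketch is that the last entry is inferred rather than quoted; building a cognitive model means going beyond what the text literally says.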

So far, the field of natural language understanding has fallen between two stools: one (deep learning) is fabulous at learning but poor at compositionality and the construction of cognitive models; the other (classical AI) incorporates compositionality and the construction of cognitive models, but is mediocre at best at learning. And both are missing the main thing — common sense!

--


Alisher Abdulkhaev

Machine Learning Engineer @ Browzzin & board member of Machine Learning Tokyo: https://medium.com/@mltai