"Go there don't know where,
and find that don't know what!"
A copy of this post was sent to a professional magazine (naturally, the magazine version of this piece was written in a formal style and had no pictures or personal assessments).
The response was short but positive - in a way.
10-Jan-2018
Dear Dr. Voroshilov:
We enjoyed your letter,
but the board declines to publish it because they thought it would be too
controversial.
Sincerely,
Makes me feel satisfied. I take "controversial" as a compliment. It combines "original", "unexpected", and "interesting".
On a definition of AI
Prologue
Ask an AI professional:
“What is your definition of Artificial Intelligence?”
At first you will hear a
description of many functions and abilities of intelligent beings (us).
If you keep insisting:
“No, I don’t need a description of it, I want a definition”;
the best answer you get is … (“I’ll keep you in suspense”, but if you want to
know the answer immediately, just scroll down the page).
Definitions represent
the skeleton of a science (any science).
If a research field does not have clear and operational definitions for all the fundamental terms it uses, it is not yet a true science; at best, it is a science in the making.
Giving a good definition is very important, and not always easy. Take, for
example, a famous tale about Plato and Diogenes, which says that “when Plato gave the
tongue-in-cheek definition of man as "featherless bipeds,"
Diogenes plucked a chicken and brought it into Plato's Academy, saying, "Behold! I've brought you a man"” (https://en.wikipedia.org/wiki/Diogenes).
Nowadays, stories about
new AI achievements are everywhere. But what is AI? What is a definition of it?
The article “Artificial Intelligence” in Encyclopedia Britannica is composed of about 8000 words. It can be divided into three major parts.
The first part (the
shortest one) simply says that artificial intelligence is like human
intelligence but artificially manufactured.
The second part is a shorter version of the article about human intelligence.
The third part describes various technical approaches to constructing AI.
The clearest and closest to an actual "definition" of AI is provided in the first part, i.e. AI is an artificially manufactured system which can do what HI (human intelligence) can. However, this is more a description of the scientific field than a definition of a specific property of a system.
I think the best approach is to define any artificial object in terms of its natural/biological counterpart ("an artificial arm", for example, is "an artificially made arm"), because it fits Occam's razor. So, AI is an artificially made system which possesses the property called "intelligence".
That leads us to search for a definition of intelligence in general (1. HI, or human intelligence, represents one possible - biological - realization of intelligence; 2. "artificially made" also may have stretched interpretations, e.g. when doctors take a sperm and place it in an egg, is the baby made "artificially"? Where is the line between "naturally" and "artificially"?).
The article “Human Intelligence” in Encyclopedia Britannica is composed of about 9000 words. It describes various approaches to understanding what intelligence is: its aspects, elements, properties, manifestations.
But this article does
not give a clear definition of intelligence.
Here are the first two
sentences from the article, quote:
“Human intelligence,
mental quality that consists of the abilities to learn from experience, adapt
to new situations, understand and handle abstract concepts, and use knowledge
to manipulate one’s environment.
Much of the excitement
among investigators in the field of intelligence derives from their attempts to
determine exactly what intelligence is.”
The article provides a short description of many such attempts, but does not offer a single description of what intelligence is that dominates the field.
Without a formal definition of AI, searching for AI would be like “go there don’t know where, and find that don’t know what” (this is what the Tsar said to Andrei the Soldier, according to a famous Russian tale).
So, every researcher in
the field of AI development has some definition of AI (because, clearly, they
know "that, what they want to find").
But without one commonly accepted definition of intelligence (or anything else, for that matter) every researcher who is trying to construct AI (or anything else, for that matter) can base the attempts on whichever description best fits his or her own views.
Of course, the majority
of actors in the field base their actions on something they all have in
common (that common part defines the field).
And that common part which defines the field of AI is patterns.
Everyone in the field accepts, as the basis for all R&D activities, the fact that intelligence does not exist without pattern recognition. But as a result, most researchers in the field of AI shrank "intelligence" to "pattern recognition", i.e. they simply made intelligence and pattern recognition equal: "intelligence is pattern recognition".
From a newer piece on the matter:
BTW: it happens every time when the process of the development of a definition of an object is based on listing the set of attributes the object has or doesn't have ("men are featherless bipeds").
Of course, deep inside
they all know that this approach is wrong (or at least insufficient), but
without having a definition of intelligence, that is the best they can do.
And we all know that intelligence (at least HI) is more than just an ability to recognize or produce patterns, because animals also can recognize patterns. Even more, the animal world provides many examples of complicated pattern development (e.g. bees, termites, beavers, spiders).
As an expert and a professional in HI, I have been
searching for a simple, clear, workable, operational definition of intelligence
(BTW: another question about definitions - what is the difference between an “expert”
and a “professional”?).
In 2017 this search finally came to an end.
This is my definition of
intelligence:
Intelligence is the property of a system/host; the mission, the reason for its existence, and
the core ability of intelligence is creating/designing solutions to problems which have
never been solved before (by
that system/host).
© Valentin Voroshilov, 2017
**
Although, term "designing" implies a specific mental work called "thinking", i.e. deliberate manipulation with different mental objects, in May of 2019 I added to this definition and explicit statement: ...
(2) the criterion of intelligence (intelligent activity) is an ability to describe the created solution using symbols (audio, visual, textual, gestures).
So, my full definition: intelligence is the property of a system/host whose mission, reason for existence, and core ability is creating/designing solutions to problems which have never been solved before (by that system/host); and the criterion of intelligence is the ability to describe the created solution using symbols.
This definition imposes very strict limits on the structure of the actions required for the development of intelligence ("a baby Einstein in a jungle"), but AI people have no idea how that structure is organized.
Creating a solution and solving a problem are two different actions, but not many people can see the difference (as described in this post).
A mouse can find a way through a new maze. But a mouse cannot do it by staring at a picture of the maze, and definitely cannot describe the solution to anyone else.
**
All other aspects of intelligence (heavily discussed in literature, and artfully presented in Encyclopedia Britannica) play their roles, and take their places as devices, components, abilities, organs, functions required for intelligence to exist, perform, and achieve its goals, fulfill its mission - creating, again and again, a solution to a problem which has never been solved before.
Although, term "designing" implies a specific mental work called "thinking", i.e. deliberate manipulation with different mental objects, in May of 2019 I added to this definition and explicit statement: ...
(2) the criterion of intelligence (intelligent activity) is an ability to describe the created solution using symbols (audio, visual, textual, gestures).
So, my full definition:
This definition imposes a very strict limits on the structure of the actions required for the development of intelligence ("a baby Einstein in a jungle"), but AI people have no idea how that structure is structured.
Creating a solution and solving a problem are two different actions, but not many people can see the difference (as described in this post).
A mouse can find a way through a new maze. But a mouse cannot do it by staring at a picture of the maze, and definitely cannot describe the solution to anyone else.
**
All other aspects of intelligence (heavily discussed in literature, and artfully presented in Encyclopedia Britannica) play their roles, and take their places as devices, components, abilities, organs, functions required for intelligence to exist, perform, and achieve its goals, fulfill its mission - creating, again and again, a solution to a problem which has never been solved before.
One can hear or read again and again that "intelligence is an ability to solve problems". But this statement is not operational; at the very least, it is not differentiative enough to be a definition. Digging trenches solves problems. Anyone can name numerous other activities which solve problems but do not require intelligence (of the highest level, which is assumed when people say "intelligence"). A person (or even an animal) can be drilled in how to solve a specific problem for when that problem arises - also with no intelligence involved. My definition is operational and differentiative; it allows one to separate intelligent activities (of the highest level) from activities acquired via drilling practices (including the infamous "rote memorization").
BTW: I have a clear vision of how AI can be used to study and improve learning and teaching practices on a large scale (and how education practices can advance AI development). In particular, I have developed a specific strategy for using advances in AI to develop a new type of content-knowledge measuring instruments in physics, mathematics, and chemistry. Based on my experience of teaching problem-solving and my knowledge of how the mind learns, I also envision a specific strategy which will lead to the development of AI capable of solving physics problems, potentially even winning a physics competition, and then capable of becoming an artificial physics teacher (not the best one, but better than many current ones). When one creates a solution to a physics problem one has not solved in the past, one's reasoning process follows the steps a scientist uses when uncovering laws of nature. DARPA wants to support research leading to the development of an AI Research Assistant. Creating AI which can solve physics problems and then teach how to solve physics problems is the natural first step in that direction.
Someone might ask: how come professionals in the field could not come up with an operational definition of intelligence, but a physics teacher could?
Firstly, I am not just any physics teacher. I am a very good physics teacher who for a long period of time has been successfully using his own natural human intelligence. For example, this is an excerpt from one of many student evaluations: “I hated physics before taking this course, and now after taking both 105 and 106 with Mr. V, I actually really enjoy it. He is one of the best teachers I've ever had. Thank you” (ten more pages on this link :) ).
Why am I good at teaching physics?
Because: (1) I know patterns needed for creating solutions to physics problems (and problems in general); (2) I know patterns needed for learning how to create solutions to physics problems; (3) I know patterns needed for teaching how to create solutions to physics problems; (4) I am good at employing those patterns in my teaching practice.
Secondly,
This is what teachers can do! From NASA's "Brief History of Rockets":
https://www.grc.nasa.gov/www/k-12/TRC/Rockets/history_of_rockets.html
“In 1898, a Russian schoolteacher, Konstantin Tsiolkovsky (1857-1935), proposed the idea of space exploration by rocket. In a report he published in 1903, Tsiolkovsky suggested the use of liquid propellants for rockets in order to achieve greater range. Tsiolkovsky stated that the speed and range of a rocket were limited only by the exhaust velocity of escaping gases. For his ideas, careful research, and great vision, Tsiolkovsky has been called the father of modern astronautics.”
And thirdly, for business-minded people, remember that if it wasn't for Steve Wozniak, the world would most probably never have known Steve Jobs.
I would also like to use this opportunity to say a couple of words about the definition of "Machine Learning".
There are many definitions of "Machine Learning".
For example, “Machine learning is a field of artificial intelligence that uses statistical techniques to give computer systems the ability to "learn" (e.g., progressively improve performance on a specific task) from data, without being explicitly programmed.”
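For concreteness only, here is a minimal sketch (in Python, with made-up numbers) of what "progressively improve performance on a specific task from data" typically looks like in practice; everything in it is an illustrative assumption, not something taken from the quoted source:

```python
# A minimal, hedged sketch of "progressively improving performance from data":
# fit y ~ w * x by gradient descent on a tiny, made-up dataset.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (x, y) pairs, roughly y = 2x

w = 0.0              # the parameter the "machine" starts with
learning_rate = 0.05

for step in range(100):
    # mean squared error: the "performance on the task"
    error = sum((w * x - y) ** 2 for x, y in data) / len(data)
    # gradient of the error with respect to w
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * gradient  # performance improves step by step, driven by data

print(f"learned w = {w:.2f}, final mean squared error = {error:.4f}")
```

No rule "multiply x by 2" is ever explicitly programmed; the loop only adjusts a number until the error shrinks - which is exactly the narrow sense of "learning" the quoted definition describes.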
None of those "definitions" are definitions. They merely describe the scientific field of machine learning but do not define machine learning. Mostly because they do not define "learning".
If they did, the rest would be obvious.
This is the definition of Machine Learning: "machine learning is what people do when they learn, but this time it is done by an artificially made object, i.e. by a machine". After a short general introduction, most authors offer some version of a brief description of the specific methods used to process different patterns (these can be found in any textbook on AI or ML).
The true goal is to define "learning".
"Learning" also has many definitions. The most common one (which comes in various forms) is "learning is the process of the acquisition of knowledge", or "the knowledge obtained during the processes of learning". Both definitions are correct, in their way, because they do describe learning. But that type of learning is not the type AI professionals have in mind when they say "machine learning". Those two definitions do not allow to establish if learning has actually occurred beyond mere memorization (a.k.a. "acquisition of knowledge"). Machine learning as a memorization is clearly of no interest, because these days we all know very well that machines can accumulate, store and retrieve huge amount of information. Of course, the algorithms, techniques for acquiring and processing that information represent important technical part of machine learning.But that part has little to do with the actual process called "learning". Even blind and deaf people can learn to the highest level (up to getting PhD).
As an expert in human intelligence, I define "learning" (more accurately, "productive learning") as a process leading to the production of knowledge; as a first approximation (scientific thinking in action), learning is a process of utilizing currently active knowledge in order to produce new knowledge (for example, the statement "I learned how to do it" represents some of the new knowledge developed during learning). The criterion of "learning" ("actual learning", "real learning", "true learning", "productive learning") is the ability to use existing knowledge to generate knowledge previously not available to the actor of learning. Machine learning is happening when a machine (an artificially manufactured object) produces new knowledge based on the knowledge currently available to the machine.
BTW: what is "knowledge"? Without an operational definition of "knowledge" how do we know if the new knowledge has been produced? If a machine takes a text and randomly permute and recombine letters, words, sentences will it be "new knowledge"? More importantly, what types of knowledge exist? how does knowledge evolve? what is the structure of knowledge? how is the structure of knowledge reflected in the structure of neural network processing that knowledge? People in AI don't seem interested in those questions. At least there is no single page from 1100 pages of "Artificial Intelligence: A modern Approach" (by Stuart J. Russell, Peter Norvig; 3d edition) where those questions about knowledge would be discussed. They talk about "knowledge" as if it is something obvious, or define "knowledge" as "information", which is a severe simplification, in part because it ignores an important feature of "knowledge" - it has a vector; it is purposeful (in general). They define learning as making a match between a hypothetical knowledge and the factual knowledge (meaning "information"). This does formally describe a procedure leading to "new knowledge": (1) state a hypothesis; (2) gather facts; (3) compare; (4) decide. For example: (1) this is a banana? (2) run image recognition; (3) correlation 0.98; (4) ye, that is a banana! (if needed, e.g. to decrease % of mistakes, learning can be "reinforced", and "deepened"). But (A) for people true learning usually begins after learning how to recognize various shapes; (B) this learning ignores "learning as a skill development"; (C) and also it ignores the central feature of learning - its intentionality (humans have a desire to learn, including about themselves, built into the genetic code; good teaching is based on that; bad teaching ignores or even tries to break this desire).
Finally, since the ultimate mission of learning is progress:
(1) acquisition of knowledge is useless if it does not lead to the development of new practice (starting from the development of new individual skills).
(2) the development of new practice (starting from the development of new individual skills) always leads beyond the acquisition of knowledge to the development of new knowledge.
That means that AI developers also need to define "skills", "new skills", "machine skills", "skill development", etc., in a way assessable for a machine and by a machine.
Machine learning is happening when a machine (an artificially manufactured object) develops a new skill based on the skills currently available to the machine.
Back to the main topic.
When the host of
intelligence (e.g. a human person) creates a solution to a problem the host has
never solved before (or has no memory of that) but that problem has been solved
in the past by other host(s), intelligence plays only a local role – for that host only.
But when the host of
intelligence (e.g. a human person) creates a solution to a problem NO host has
ever solved before, the result has a global value – for the whole assembly of
hosts (e.g. human society).
BTW: (a) this definition
of AI should be sufficient to design the Turing test (which will be possible only under the assumption that the machine will not be able to lie).
(b) teaching creativity
(a.k.a. critical thinking, creative thinking, lateral thinking, inventiveness)
is “simply” teaching students (including artificially made ones) how to create solutions to problems they have
never solved before, i.e. teaching students how to be intelligent (what I have
been successfully doing for many years).
Naturally, my definition
of AI is based on a subset of definitions, for example, on a specific view of what a problem is, what it means “to solve a problem”, and much more (that is why I have been intensely publishing on this blog).
In this piece, I only want to present the difference between a problem and a task:
1. When someone needs to achieve a goal, and knows what actions to perform in order to achieve it, it is not a problem, it is a task.
2. When someone needs to
achieve a goal, and does NOT know what actions to perform in order to achieve
it, that IS a problem.
The definitions above represent the simplest description of “a task” and “a problem”, but they can already be used as a means for differentiating intelligent actions from routine actions.
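Purely as an illustration of how operational this distinction already is, here is a hedged sketch (hypothetical names, toy registry) that classifies a goal as a task or a problem by checking whether a procedure for achieving it is already known:

```python
# A hedged illustration of the task/problem distinction defined above.
# known_procedures is a hypothetical registry of actions the agent already knows.

known_procedures = {
    "boil water": ["fill the kettle", "switch the kettle on", "wait for the whistle"],
}

def classify_goal(goal):
    """Return 'task' if the agent knows what actions achieve the goal, else 'problem'."""
    if goal in known_procedures:
        return "task"     # routine action: just execute the known procedure
    return "problem"      # intelligence is needed: a solution must be created first

print(classify_goal("boil water"))          # -> task
print(classify_goal("prove a new theorem")) # -> problem
```

Routine action corresponds to executing an entry from the registry; intelligent action corresponds to constructing a new entry that was not there before.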
There is one more
question, the answer to which affects the whole discussion: “What is a
scientific definition of “a scientific definition”?”.
I like to ask my
students a short version of it: “What is a definition of “a definition”?”, and
it always makes them think hard, and generates a discussion. Everyone is welcome to
join this discussion (BTW: this discussion is essential, crucial for the final
choice of the actual definition).
So, what is the meaning
of AI – as a symbol (abbreviation)?
Well, first and foremost
it is the ultimate goal of the R&D in the field of AI development.
But currently, it is a
brilliant marketing instrument, helping to promote the R&D in the field.
The actual abbreviation should be APRS for an Artificial Pattern Recognition
System, but AI of course is much cooler!
Epilogue
Ask an AI professional:
“What is your definition of Artificial Intelligence?”
At first you will hear a
description of many functions and abilities of intelligent beings (us, people). Basically,
AI is described as an artificial human, which is not an actual definition.
If you keep insisting:
“No, I don’t need a description of it, I want a definition”;
the best answer you get is: “AI is an artificially manufactured pattern
recognition system which can expand/advance/increase/broaden the scope of its
own functions without human interference”.
This definition accurately describes certain important aspects of human intelligence.
However, this definition
also accurately describes animal
behavior.
Everyone who is fine
with being on the same level with animals can keep using that definition.
Otherwise, I suggest
switching to mine.
BTW: there are at least two simple ways out of this conundrum. The first is introducing two definitions:
1. General Intelligence
(GI) is a pattern recognition system which can expand/advance/increase/broaden
the scope of its own functions without interference from other intelligent
systems (this incorporates all animals, including humans). It is not clear, though, whether the absence of interference from other intelligent systems places limits on the level of the development of GI. For example, humans are not capable of developing advanced intelligence without communicating (a form of "interference") with other humans (leave baby Einstein in the jungle with monkeys and he will grow up a - very smart! - monkey).
2. Human Intelligence
(HI, or for a broader use – Intelligent Intelligence, or Logical Intelligence,
or Ultimate Intelligence) is the property/feature of a system with the mission,
the reason for its existence, and the core ability of creating solutions to problems which have never been solved before
(by that system).
The exact relationship between GI and HI is not clear to me at this time.
The current meaning of AI becomes the equivalent of AGI (artificial GI), or AHI, which includes HI and another “A”I (animal I).
There may be a case for a term UAI, The Ultimate AI, which does not yet exist (but is heavily described as “almost here”), and includes HI, AHI, GI, and AGI.
The second approach is also to introduce new definitions:
i.e. keep the word "Intelligence" for "an ability to solve problems which have never been solved before (by the host)", but name the animal behavior differently, e.g. "Animal Intelligence", or "Pre-Intelligence", or "Quasi-Intelligence", "Pseudo-Intelligence", "Intelligent Orientation", ... . Of course, there is some overlap between "Intelligence" and "Animal Intelligence", some gray area where intelligent species appear to act like animals, and animals appear to act like intelligent species - that is inevitable - but it does not make the definition less useful.
After the Epilogue: or The part which has no name
because it goes after the EPILOGUE, which by definition is the last part
of a written piece
The distinct, unique,
crucial, necessary and sufficient attribute, feature, property, expression of
intelligence (HI) is (wait for it) – DOUBT.
Creating a solution to a problem which has never been solved before inevitably leads to some uncertainties, to situations when there is no purely logical reasoning leading to the answer, to the goal, to the expected result (creating a solution always includes moments of insight).
In this situation an intelligent subject always KNOWS (but first - feels) that this is the time
when the only possible action is to “go with the gut”, “to flip a coin”. The
result of this guess (insight, hypothesis) – “do this” – is based on fluctuations in the neural network of networks
called a brain. This is what no current so-called “AI” can do. Current “AI” has no doubts. It makes the decision (“this is this face”, “this is this word”, “this is this …”) based on the training it had. The better its training was, the fewer mistakes it makes (e.g. looking at a banana and seeing a face).
But the current “AI” never doubts its choices. Currently an HI
(human intelligence) needs to interfere to check AI's decisions (if only HI
was always smarter than “AI”: http://www.cognisity.how/2018/02/Facebook.html).
Until an artificial
brain learns how to process fluctuations in its network, artificial
intelligence will not be an actual intelligence but merely an
efficient pattern recognition device.
And even in such a developed field as visual pattern recognition, current "AI" makes silly mistakes. Let's say an "AI" is trained to recognize a banana. It cannot see the difference between an actual banana and a picture of a banana, because that would require understanding perspective vision (and more). A coder can try to write a mathematical model for that. But in order to learn the difference between a picture of an object and the object itself, AI needs to do what children do when they learn about the world; it needs to see, walk, and touch things, and to learn the correlations between actions and the timing of sensations. This is way ahead of today.
When current AI recognizes a pattern (visual, audio, numerical) it only makes a statement in the form "yes - that is that thing", "no - that is not that thing". But the processes in the network which lead to the final statement also have their own patterns. In a brain, there is at least one other network of a higher level which analyzes and recognizes the patterns happening in the lower network that makes the decision about the pattern/object. That higher-level network generates another signal - a doubt - "are you sure?" And then there is another network which makes another decision: "yes, I am sure" or "no, I am not sure". And then ... - long story, but you see the pattern.
No AI is even close to mirroring this type of pattern/pattern/pattern recognition (that requires developing a hierarchy of networks analyzing a hierarchy of patterns).
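One (purely hypothetical) way to sketch that layered "doubt" idea: a lower network outputs class scores, and a higher-level module looks at the pattern of those scores - here, their entropy - and decides whether to say "I am sure" or "I am not sure". This is only an illustration of the idea, not a claim about any existing system:

```python
import math

def lower_network(image):
    """Stand-in for the lower-level recognizer: returns class probabilities (made-up values)."""
    return {"banana": 0.55, "face": 0.40, "other": 0.05}

def doubt_network(probabilities):
    """Higher-level 'network': looks at the pattern of the lower network's output.
    Here doubt is modeled by the entropy of the distribution (an assumption)."""
    return -sum(p * math.log(p) for p in probabilities.values() if p > 0)

def decide_with_doubt(image, doubt_threshold=0.8):
    probs = lower_network(image)
    best = max(probs, key=probs.get)
    if doubt_network(probs) > doubt_threshold:
        return f"maybe a {best}, but I am not sure"   # the "are you sure?" signal fires
    return f"yes, that is a {best}"

print(decide_with_doubt("some image"))  # -> "maybe a banana, but I am not sure"
```

In a real brain the "doubt" layer would itself be analyzed by yet another layer, and so on; the sketch only shows the first two rungs of that hierarchy.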
That is why I also added another post on the matter: "Relax, the real AI is not coming any soon" (that post also has some insights on what "common sense" is).
P.S. The field of AI training will become much more important than it is today, although not many AI professionals see that so far.
Appendix I: a conversation with a professional
Recently I was informed about a 2007 paper containing a survey of various definitions of intelligence (https://arxiv.org/abs/0712.3329). The list is very impressive. I found two which could be seen as definitions, and both are relatively similar to mine.
"Intelligence ...
is an ability ... to solve new problems" // W.V. Bingham. Aptitudes and
aptitude testing. Harper & Brothers, New York, 1937.
I would say my definition includes this one but makes a more specific statement, which makes it more operational.
Another one was "Intelligence ...
is an ability ... to achieve goals" (belongs to the two authors of the
paper).
I would argue that essentially it translates into my definition, with the goal being "to achieve the solution constructed for a problem which has never been solved before”, which makes my definition clearer and more operational.
I sent an email in which I described my view. In the reply I was pointed to the importance of "being able to solve problems in various environments (solving a wide range of problems, achieving a wide range of goals)".
I responded that I would not consider the additional description - "wide range” - an important part of the definition of intelligence, for the following reasons:
1.
from my point of view that is implied and obvious.
2. a definition of something, including Intelligence, should be concise, sufficient on its own, without the need for additional explanation of a possible interpretation.
3. the host of intelligence does NOT have to use it widely; the definition should allow one to observe (measure, assess) one individual and conclude whether the host has or doesn’t have Intelligence (e.g. Turing tests).
4. Intelligence should not depend on a specific field of action; the property/ability/feature called “Intelligence” should be “field-independent”, which makes it “field-universal”, meaning that if it works in one field, it will work in any/every field. The ability to create solutions to problems which have never been solved before is exactly of that type.
"Achieving
a goal" (in any practical or theoretical field) when you KNOW how to
achieve it is very much different from a situation when you
DON'T know how to achieve it and have to develop/design the solution
(procedure, protocol, device); that ability is the central core of HI, or “I”
in general (please, refer again to my differentiation between "a
problem" and "a task").
Speaking about the definition of AI, my view is that, no matter what definition of Intelligence is used, Machine “I”, or AI, always means the same thing - Intelligence developed artificially. There is no need for a special definition (just as an artificial arm is an artificially made arm). It may make sense for internal use among AI developers, but for the general public, practitioners, and educators, “A” just literally means "Artificial". Although that would require a discussion about the meaning of the word "definition", including its purpose (and history, e.g. what Aristotle meant by a “definition”, etc.).
In conclusion I made the point that if a commonly accepted definition of Intelligence existed, it would be presented in the corresponding article in Encyclopedia Britannica. Since that is not the case, the question is still open, and the discussion remains vital.
NB: This response of mine effectively concluded our communication; since then I have not heard back a word. As an expert in Human Intelligence, which includes human psychology, I know the reason our communication was severed. I shook the ego of the authors of the paper. They had a nice construct of what they called "Intelligence", but some guy from the streets, someone with no name, no recognition, poked it and made a big hole in that construct. So, they did what most people do in this situation: they pretended that nothing happened. Of course, those people are not idiots. In their minds they continue to mull over our conversation, their argument, my counterarguments. Eventually they will come up with their new definition of Intelligence, one which will have something from their old definition but will also have crucial elements of mine, and they will think that they came up with this new definition completely on their own (or with another idea from any of my posts). Which is fine. All this AI stuff for me is just a hobby on the side (at least for now). Once in a while I just like poking a sleeping bear and seeing what happens. So far 100% of my expectations have turned out to be correct.
Appendix II: another conversation with another professional (with a slight touch)
Dear
Valentin,
I'm not so keen on definitions. You know the old challenge: can you define a game?
Best.
Dear
Dr. …,
thank you very much for your note.
I follow the General Theory of Human Activity:
* science is one of human practices;
* as such it evolves, and has phases, stages, and levels;
* the direction of evolution of science does not depend on the
actual field;
* there is a stage when people in the field do not have commonly
accepted definitions;
* there is a stage (the higher one) when people in the field have
developed commonly accepted definitions;
* this transition is inevitable and unavoidable;
* and, of course, in every science, there are terms which cannot
be defined (the root terms), but it does not mean nothing can be defined, on
the contrary, everything which can be defined needs to be defined;
* and if something (mass, charge, game, intelligence) has been
defined, it does not mean that definition will not evolve in the future;
* one of the goals of a scientific methodology is to separate the categories in the field into definable and non-definable ones.
Since within any linguistic system, including science, not all terms are definable, classifying terms as definable or undefinable is part of the job of the scientists in the field. What to do with undefinable terms (and maybe "game" is one of those) is also part of the discussion. But the existence of undefinable terms does not logically lead to the non-existence of definitions (and I believe my definition of intelligence is a definition).
At a certain point it all goes down to personal beliefs (I have
more on this matter here: http://www.cognisity.how/2017/12/religion.html).
On your challenge, for me personally, a game is:
1. human activity (or in general, intelligent activity)
2. the participant or participants can choose to participate or
not in that activity without damaging consequences
3. the participant or participants choose to follow specific, shared rules
4. there is a specific rule or rules (a criterion) which describes when the game is finished and what the result is
5. after the game the participants can return essentially to the pre-game state, i.e. the game does not have a drastic effect on the participants’ lives (clearly, the word “drastic” gives wide leeway for interpretation, but this statement works at least as a first iteration).
From my view, this is not yet the final definition, because these conditions are necessary but not sufficient; still, it grasps the essence of what a game is.
I would be happy to have coffee with you some time, if you have
such an opportunity.
Sincerely,
Valentin
BTW: so far no coffee
P.S. After the letter was sent I came up with this version of the definition: a game is (1) a pretended life (in what people call "life" they would not do "it"); or (2) a life pretended to look like a game (in what people call "life" they want to do "it" but do not want to show that they want to do it).
Well, I was not the first one to venture a similar sentiment: "Life is a theater".
Interesting fact: one can replace the word "art" in the first quote with basically anything ("teaching", "managing", "researching", ...) and it will still stand!
P.P.S. After I had developed my own definition of a game, and its shortest version (the sentence in blue), I naturally looked it up online. All sources basically say that a game is a play, or a competition, or ..., followed by a list of possible activities, which is technically not a definition but an analogy.
Appendix III: on the general structure of a problem-solving process
The general structure of
a problem-solving process, or PSP (i.e. the process required to solve a
problem; more specifically, the process required to create a solution to a problem),
does NOT depend on the problem; in particular, it does not depend on the field
to which the problem belongs.
That means that (1) one
needs to learn how to design the PSP in one field. The BEST field to do
that would be physics (here is why: (A) a text, http://www.cognisity.how/2016/12/learnphy.html;
(B) slides, http://www.gomars.xyz/1717.html,
slides 59-61 point to a relationship between cyber thinking and thinking, which is described in greater detail in "How much of "cyber" in "cyberthinking"?");
then (2) one
needs to learn how to transfer that skill to solve problems in another field
(it does not matter which one),
and (3) after that one will be able to transfer that problem-solving skill (PSS) to ANY field.
The described three stages represent the fundamental basis for teaching that establishes reliable transfer of knowledge (this approach is not presented in the literature, but is essentially based on Vygotsky's theory of the Zone of Proximal Development; e.g. at https://psyjournals.ru/en/kip/2016/n3/zaretskii.shtml; more at http://psyjournals.ru/en/search/?q=zaretskii).
The specific structure of the PSP in physics is described at http://www.cognisity.how/2018/02/Algorithm.html.
For the specific thought process in physics, see http://www.cognisity.how/2018/02/thinkphy.html
Appendix IV
On Wednesday, 02/14/2018, I was listening live to a Congressional hearing on AI (https://oversight.house.gov/hearing/game-changers-artificial-intelligence-part/). Everyone who has the slightest interest in AI should listen to it, too. I would like to point at only three (of many) interesting moments.
1. Despite one of the first stated goals of the hearing (to clarify what AI is), none of the four panelists offered a clear definition, except saying “AI is what we see in the futuristic movies” (meaning, basically, devices acting like people). I would like to have a discussion about my definition of AI (which is: an artificially manufactured system which can create solutions to problems the system has never solved before).
2. When asked when AI could exhibit reasoning abilities similar to humans', all four panelists offered numbers between 20 and 30 years from now. Which makes perfect sense to me. If they said "fifty", congressmen could start thinking "well, if it is so far ahead, what's all the fuss; we have more pressing matters to finance". But they just could not say "ten", because they all knew (and everyone in the field knows, and they knew they know) that "ten years from now" is just not realistic, not believable (and lying to Congress is bad – at least according to the movies).
3. When asked about the
areas where AI can bring significant advances, NONE (!) of the participants
named education. Clearly “big fish” in AI don’t have education on the list of their priorities (it didn’t pop up in their minds), or at least not as a potential funding-generating field. That is despite the fact that the training procedures they use to “teach” AI, such as “supervised learning” and “reinforcement learning”, are just the simplest teaching approaches – way before, say, John Dewey’s Constructivism. The reason behind this fact is very simple – current AI does NOT require any complicated teaching strategy; current AI is not really smarter than a dog (it can recognize a face, a voice, a command) - well, a very fast-thinking dog. And since it will not require such a strategy for at least twenty years, why even bother? This is one of the reasons that all my attempts to reach out to AI professionals failed. And this is one of the reasons for me to start an open search for collaborators interested in merging advances in AI with education.
Appendix V
I am one of those professionals whom the President of M.I.T., Prof. Rafael Reif, calls "bilinguals". M.I.T. has designated $1 billion to create a new college with the sole goal of studying AI. In the past I called for creating an institution which would concentrate on the use of AI in education. I would hope the new M.I.T. college will have such a lab.
Here is our brief communication.
To: Prof. L. Rafael Reif,
The President of M.I.T.
Massachusetts Institute of Technology
77 Massachusetts Avenue
Room 3-208
Cambridge, MA 02139-4307
Dear President Reif,
I am one of those bilinguals
who, according to you, will be transforming the future.
I am a theoretical physicist by
trade, a physics instructor and a writer by my profession, and an expert on
human intelligence.
I would like to be a part of
the laboratory “AI for education” of the new M.I.T. Stephen A. Schwarzman
College of Computing.
I personally would like to be involved in the research and development of an AI named “a perfect physics student”, i.e. an AI which can solve any problem from any regular physics textbook; and an AI named “a perfect physics teacher”, i.e. an AI which can explain to a regular person how to solve any problem from any regular physics textbook, and guide a student through the problem-solving process better than an average physics teacher does these days.
I am a very good physics teacher (I have solid proof of that at: http://www.gomars.xyz/vvcvres.html).
I
also have a very good knowledge of how AI works, or should work. I have
published some pieces on the matter, including one giving an operational
definition of AI: the latest piece is available at http://www.cognisity.how/2018/05/AHLI.html; the
original definition is presented at http://www.cognisity.how/2017/12/AIdef.html.
In fact, the latter piece was reviewed by a professional peer-reviewed magazine with the following response, quote:
“10-Jan-2018
Dear Dr. Voroshilov:
We enjoyed your letter, but
the board declines to publish it because they thought it would be too
controversial.
Sincerely,
Editor, Journal of
Experimental & Theoretical Artificial Intelligence”.
I
think, “too controversial” may be exactly
what the new College may embrace.
Of course, the strategy which
will be used for AI in physics education can be used for math education, and
for other science subjects.
I also have experience in educational consulting and teacher professional development, and have a clear
vision for specific projects merging advances in AI with advancing the practice
of education in general.
I am confident with all the stages of the development of AI applications besides coding, but coding is the least important part of the whole process (in fact, any good physicist can become a good coder, but the reverse statement is wrong: http://www.cognisity.how/2017/12/cyber.html).
I hope to hear from a
relevant person with whom I could discuss the details of my
prospective/possible professional involvement with the College.
Thank you in advance,
Sincerely,
Dr. Valentin Voroshilov
__________
Dear Dr. Voroshilov,
Thank you for your note and your interest in the Schwarzman College.
We are in the very early days of the College, with a search for a new dean just getting started. The dean, of course, will be critical in shaping the College's educational programs and opportunities. I encourage you to reach out to the dean once he or she is named.
Warmly,
Rafael Reif
__________
Dear President Reif,
First, I want to thank you
for your response, I did not expect it, and I appreciate it.
I also want to thank your
assistant who screened your mail/email, and note that you clearly have a good
understanding of people because you hired as your assistant a person who sees
the difference between empty letters and a letter which may be of your
interest.
I appreciate your input and your advice, and I will follow it when the dean is named.
But I hope that when that happens, you might mention this brief communication to the dean.
I would like to use this,
maybe final, opportunity to communicate with you to add to my first letter only
one new point.
Recent DARPA pre-solicitation
#
DARPA-PA-18-02-02, titled “the
Artificial Intelligence Research Associate (AIRA) program” “invites submissions of innovative basic research proposals to
address two main objectives: 1) explore and develop novel new AI algorithms and
approaches for discovery of scientific laws and governing equations for
complex physical phenomena”.
I would like to point out that the
reasoning process one uses when constructing a solution to a physics problem
one has no experience of solving in the past has close similarities to the
reasoning process one uses “for discovery of scientific laws and governing equations
for complex physical phenomena”.
Creating AI which can solve physics problems is a natural step toward creating an AI Research Associate.
Since I am not in the
field of AI, I have no chance to get any funding from DARPA.
Boston University researchers do not have a solid representation in the field of AI.
That is why I started searching for a group of established researchers who would be interested in my projects “AI as a perfect physics student” and “AI as a perfect physics teacher” (described in my first letter).
Naturally,
M.I.T. was my first choice.
Sincerely,
Dr. Valentin Voroshilov
BU, Physics Department
"I hated
physics before taking this course, and now after taking both 105 and 106 with
Mr. V, I actually really enjoy it. He is one of the best teachers I've ever
had. Thank you".
For more on AI:
Relax! The Real AI is Not Coming Any Soon!
http://www.cognisity.how/2018/05/AHLI.html
Who Will Train our Artificial Puppies?
http://www.cognisity.how/2018/04/aipuppies.html
The Dawn of The New AI Era.
Will Artificial Intelligence Save, Replace or
even Affect Education Practices? (a venture capitalist’s view)
What does an educator
need to know about a brain?
Is Artificial
Intelligence Already Actual Intelligence?
And this link is to the post about how to teach students to make them ready for the AI era.
To learn more about my professional experience:
The voices of my students
"The Backpack Full of Cash": pointing at a problem, not offering a solution
Essentials of Teaching Science