________
A developed brain regularly generates ideas outside of the direct interests of the host.
That only happens in a human brain, because other animals have no ideas (and they have no idea about that). For other animals, the brain is a black box that receives signals and generates reactions. Those animals are not aware of the processes happening in their brains.
Humans are the only animals that can (after certain training) be aware of the processes happening in their brains.
No one is ever born with a developed brain. But everyone is born with a potentially developed brain. Who eventually gets it developed (and is often called “smart”) depends on how much good luck one has.
In us humans, most of the processes happening in the brain happen without our knowledge of them (exactly as in all other animals). Unless we deliberately think about something, we are not aware of what our brain is working on. Deliberate thinking is an exclusively human practice (e.g. The Deliberate Thinking v. Digging a Trench & The Importance of Early Exposure to Thinking). Like any human practice, it can be trained to different levels of proficiency. And like every human practice, it has its own side effects. One such side effect is ideas that come seemingly from nowhere. There are many stories about people who came up with some important or unusual idea in their sleep. There are books on the role of insight in science or business.
Every idea, every insight is the result of processes happening in our brain without our knowledge, which are then brought to us as a given statement – do this! Often, when this happens, we feel excitement – eureka! I got it! The brain uses this emotion to tell us – pay attention to this statement, it’s important. It feels like a click – something clicked in our mind, and a switch was flipped from a state of confusion and frustration into a state of revelation and euphoria (that is why – like any other thing that brings a sense of pleasure – this feeling can lead to an addiction, to the desire to have it again and again, and to a depression if it stops happening).
A note for all educators: this is what all students value the most – not fun, not a relation to everyday life, not a grade – but the feeling of excitement that comes together with “I got it! I did it!” If your students do not have that feeling – quit the job. Want to be a better teacher? Learn from the best.
Imagine that you were cooking in your sleep. You wake up, you see a dish, you taste it – yummy! You have no knowledge of how it got there, no recollection of making it, and yet it tastes great! This is what an insight is.
And exactly as in this cooking example, an insight needs ingredients. Babies do not have insights because their brain does not have enough information to “cook up” something very new (plus, they are not yet intelligent enough to express themselves in words). Their memory is not yet filled with a sufficient amount of facts (or even fake “facts”). As people grow up, their memory gets filled with more and more facts (or fakes perceived as facts).
An insight can also be about information that is missing at this time – “this is what I need to figure out!” But that insight, too, is based on the information pieces (“atoms” of our knowledge) already existing in memory. That is why (just an example, an illustration) asking an anesthesiologist what he or she thinks about a string-theory metric tensor is useless (need further explanation? write a comment or send an email). Missing information – an empty space in the network of existing connections – can be recognized as such only because of the previously existing connections! When there is nothing at all, nothing can be missing. “Missing” automatically implies the existence of other things – the existence of “something”.
An insight – any insight; every insight – is always based on a new combination of the information pieces (“atoms” of our knowledge) already (i.e. previously) existing in memory.
A note for all educators: that is why only highly experienced teachers can prepare activities and successfully guide students when the task requires the students to invent or discover something they did not know before. And that is why making students work in a group, forcing them to solve a problem before they have learned all the information necessary for solving it, is useless: a waste of time and effort, and a source of frustration – with the teacher, with the teaching process, with the school, and with themselves (a little more on “group thinking” in this post).
When we deliberately think about something, trying to solve a problem, trying to figure something out, we set in motion processes in our brain that continue even when we take a break from our thinking. We stop thinking about that thing; our brain does not. We do not think about it anymore, but our brain does – without telling us about it. And then, when our brain makes some new connections that make sense to it (based on some internal criteria, like the proximity to what we expected during our period of deliberate thinking), it lets us know about the result. Click! An insight!
But a similar situation may also happen outside the focus of our immediate interests. We have hobbies. We listen to the radio, watch TV shows, read books. All that information accumulates and eventually may result in an insight that is not related to any deliberate thinking within our professional field.
This is what I call a side effect of the functioning of a developed brain (developed in terms of a large library of facts, and also in terms of the ability to manipulate relatively large numbers of mental items/elements).
During my professional life, I have collected many such side effects. Many of the posts on this blog are such side effects. Sometimes, when I am stuck in traffic or swimming in a pool, my mind is blank, but my brain is working and brings me an idea. Like a bubble under water, it moves to the surface of my consciousness and grows to a size that no longer fits in the brain. When that happens, I start writing.
Sometimes I may even have an idea for a specific device that I think could be useful for some people. But I do not want to change my profession from a teacher to an inventor. And I also do not want to let my idea die in vain. So, I offer it to someone – usually to anyone/everyone.
Here is an example.
Many companies are working on the development of self-driving trucks. But they are doing it wrong. Using artificial intelligence to train any truck to drive along any possible route is neither intelligent nor practical. Most trucks drive along one or two specific routes. That calls for a specifically trained AI designated for one specific route – simpler, cheaper, and much safer. But in terms of safety, in addition to driving on its own, every truck needs a drone-like remote-control capability. For each group of five or six trucks, a company needs one remote “driver” who observes the routes live and can take over when he sees some difficulty, or when the AI encounters something unexpected and sends an alert.
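To make the scheme concrete, here is a minimal sketch of that supervision logic in Python. Everything in it (the class names, the alert mechanism, the route label) is a hypothetical illustration of the idea, not a design of a real system.

```python
# A toy model of the scheme above: one remote "driver" watches a small group of
# route-locked autonomous trucks and takes over whenever a truck's AI raises an
# alert. All names are hypothetical.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Truck:
    truck_id: str
    route: str                      # each truck's AI is trained for one specific route
    alert: Optional[str] = None     # set by the on-board AI when it meets the unexpected

    def self_drive(self) -> None:
        if self.alert is None:
            print(f"{self.truck_id}: driving route '{self.route}' autonomously")


@dataclass
class Supervisor:
    name: str
    trucks: List[Truck] = field(default_factory=list)   # typically five or six trucks

    def monitor(self) -> None:
        """One monitoring pass: let normal trucks drive, take over the alerted ones."""
        for truck in self.trucks:
            if truck.alert is not None:
                print(f"{self.name}: taking remote control of {truck.truck_id} ({truck.alert})")
                truck.alert = None      # resolved by the human driver
            else:
                truck.self_drive()


if __name__ == "__main__":
    fleet = [Truck(f"truck-{i}", route="route-A") for i in range(5)]
    dispatcher = Supervisor("remote-driver-1", fleet)
    fleet[2].alert = "construction zone not in the route model"   # the AI flags a difficulty
    dispatcher.monitor()
```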
As I wrote in The Biggest Fakes and Breakthroughs of The Next Decade, since 2004 I have been reaching out to hundreds of people, including venture capitalists. For them I prepared two short videos: “Free business ideas from Dr. Voroshilov: part 1” and “Free business ideas from Dr. Voroshilov: part 2”. My latest attempt is described in Is the Cat Worth Be Saved? or A Curious Case of a Risky Entrepreneur. That time I sent an email to the MIT Media Lab and asked for a short meeting. Nothing happened. Then Mr. Ito stepped down and I decided to try my luck again. I sent an email to Prof. Pattie Maes. Her answer was that the lab does not work with people outside the lab. Even though my whole point was to try a new, pioneering practice: acting as a matchmaker between people who have an idea but do not want to pursue it (e.g. yours truly) and people who can pick it up and lead the development of a device.
In seventeen years of living in Boston and reaching out all across the U.S., I have not met a single person who would be willing to take the risk of “stepping out of their element”. This YouTube conversation between a smart-not-so-bad guy and a very-smart-bad guy from “Billions” captures the essence of the current state of “risk management” at all echelons of American enterprise. Taking a risk demands, at a minimum, an ability to see alternatives. Arrogance of the “I'm so smart I know everything” kind blinds a person, and the very foundation of risk-taking (the existence of alternatives) goes away. After not taking actual risks for a long time, this ability degrades and dies out. No one wants to spend time assessing the content of a message; everyone assesses the messenger – “if it's shiny, it must be gold”.
“Investing in what you see right around the corner doesn't require a long vision (or even a long division). An investor is like a person who keeps one foot in the present (on a stable spot) and uses the other to tap around for the next stable spot to put the foot on, and then repeats the process.”
In the previous quote, the term “investor” describes any person who thinks about how to invest his or her time to advance his or her personal or professional life. And not one of the hundreds of people I have reached out to in the last 15 years would take a risk.
Of course, the “distance” from an idea, even a brilliant one, to the final product may be very long. But nothing can happen without an idea. An idea is the seed of an invention, of a new practice. That seed, of course, has to be carefully planted and nourished, and there is no guarantee it will grow into a beautiful “fruitful tree”. But I can guarantee that if there is no idea, there will be no “fruitful tree”. Period. And yet, no one wants to invest in an idea anymore. Too risky. It is much safer to look around, find a person who has already demonstrated a proven ability to deliver “success” (money, publications, prestige, ...) and make that person a good offer. This is how all American businesses currently operate – business businesses, financial businesses, venture businesses, education and science businesses. No wonder the share of non-Americans in American businesses constantly grows – it is much simpler (and cheaper) to buy ready-made “business”-men than to invest in growing “domestic innovators”.
Taking risks is not easy. I know that. I took a risk when I won a Green Card and decided to drop my great professional career and move to the U.S. – with no money, no English, no network. I believed in myself. But I also had a very strong incentive – if I had stayed, my son would have been drafted into the Red Army. If that had not been the case, I still would have moved, but I would have been more scared.
My experience demonstrates that America no longer provides incentives to take risks to people who have already achieved some stable status – in science, in economics, in philanthropy, in government, in politics. I think this is one of the sources of the overall decline in American prosperity.
The idea I wanted to offer to MIT has been brewing in me since 2009 (ten years!). I knew I would never do anything about it myself. But I wanted to hand it to people who could. And I failed. Twice. That is why I decided to give it away into the open.
Here it is.
A human brain is an amazing device. If a part of the brain gets damaged, it can rebuild itself in such a way that other parts of the brain compensate for functions that used to be performed by the damaged part (don't be lazy, google it). All it needs is (a) sensory inputs from the same sources that supplied those inputs to the damaged part (or, as with a new human organ, even from new types of sources), (b) delivered to the healthy parts of the brain (for the purpose of citation: Voroshilov's Principle of Brain Augmentation).
Simple!
Let’s say a person is blind. Video signals can be acquired and processed using a camera and an interface. That interface may be local, or may be wirelessly connected to a mainframe computer. In any case, the interface transforms the video signals and delivers them to a sensory patch attached to a large portion of the skin (e.g. on the back; but theoretically it can be anywhere, even inside the body). The patch induces sensations in the skin via a large number of point-sensors acting on the skin at many different points. A point-sensor may use an electric signal (a variable potential difference) or a pressure signal (a small electromagnet with a moving needle-like core). Of course, the sensors/cameras/microphones can register and transform inputs from outside the regular human range (e.g. ultraviolet, ultrasound, heat/infrared, an artificial “nose”, i.e. a molecular detector). Coupled with brain-reading and brain-influencing techniques, this becomes a complete brain-augmenting technology.
With the right technological solution and specifically designed training (this would be my field of expertise, especially when the experiments move from mice/cats/dogs/dolphins/monkeys to humans), a blind person will eventually develop a sensation similar to vision – of course, in a very rudimentary form, but even that is better than nothing at all (don't believe it? let's bet on it).
The same approach can be used to train soldiers or astronauts to “see” what they could not see otherwise (an actual functioning “third eye” that sees outside the visible spectrum or behind them).
The same approach can be used to develop, restore, or enhance human hearing or smell.
I would expect DARPA to be interested in this project, but my past attempts to reach out to DARPA also failed. Which is not reassuring, considering that DARPA is supposed to lead America in taking risks.
Good luck!
Dr. Valentin Voroshilov
Any idea behind a project falls into one of roughly five categories.
1. An idea that represents an expansion of an already well-known practice; scaling up an activity that is already present. An example is the GroundTruth project: there are places in America without local news – let’s place a reporter there.
2. A modification of an existing practice. For example, switching from lecturing to a “flipped classroom” model.
3. Transferring an existing practice from one technological platform to another. For example, modifying teaching by using the Internet (a transition from no technology to the use of technology), a combination of the WWW and teaching; MOOCs (only the first MOOC was an exception and fell into the 5th category). Another example is transitioning from hand-coded algorithms to so-called AI.
4. Some combination of 1, 2, and 3.
5. A brand-new original idea. It has no roots, no history, it does not grow from any previous project; it is basically based on an insight. Hence, there are no experts who could really assess the idea. “Experts” would split between “this is just crazy” and “I cannot say it will work, but I cannot say it will not”. The decision to support it or not is based on a gut feeling, risk-taking ability, and a personal attitude toward the applicant(s). Such a project is often based on the idea of combining two already existing practices into a new one that does not exist yet. An example is x.com – an online bank co-founded by Elon Musk, i.e. the combination of banking and the WWW. In the current environment such projects have no chance of getting support. The tolerance for risk-taking is zero.
P.S.
Which country was able to achieve total world domination?
The one that developed the first atomic bomb?
No.
The one that developed the first hydrogen bomb?
No.
The one that placed the first man into space?
No.
The one that placed the first man on the Moon?
No.
Future over-hyped technologies like “AI” with its various applications, augmented reality, controlled fusion, space travel, or anything else will also not be able to give a significant advantage to one country over the others (probably for the good). But there is one technology that can do that: the technology for unlocking human creativity en masse. This technology does not exist yet, but it is possible. And no country is working on it just yet. The one that starts first will have all the advantage.
Clearly, no one at DARPA (or at any of the other places I tried to reach) has ever read “Noise Level” by Raymond F. Jones (1952!).
Note: this post is a part of the series:
China v. The U.S.: The Battle Of Strategic Thinking
NOTE: A piece of history: http://news.mit.edu/2005/laptops-1005 (“For more information about the project, visit laptop.media.mit.edu.”) Does anyone still remember this highly over-hyped project today? Try this link!
Note: this page provides links to some YouTube videos on different matters (most of my videos are my lectures, but some are on politics or whatever comes to mind).
_____
My two cents on the discussion about virtual education.
I have written a lot about education, including distance education. For example, check:
The future of education is impossible without a robust online component.
I want you to know what I did last summer!
Getting ready for the fall semester? Here are some hints.
More on this page.
Here I want to point out the useless but very active discussion of how to effectively use Zoom for teaching.
The answer is - you CANNOT effectively use Zoom for teaching.
Zoom, Skype, WebEx, Google Meet, Microsoft Teams, or any other meeting software will never be good for teaching.
Of course, to understand and accept that, one needs to know what teaching is and what it is about.
In American culture, including among top educators, researchers, and administrators, teaching is treated as no different from animal training, from training circus animals to do tricks.
BTW: this is one of the reasons behind No sign for improving math education soon.
If teaching were pouring knowledge from a "knowledge storage" (a.k.a. a teacher) into an empty vessel (a.k.a. a student), then Zoom would be sufficient. But teaching is not that.
E.g.:
Teaching is the process of helping learners to learn. And learning is based on communication. If teaching could be reduced to one-on-one communication, then, again, Zoom would be fine. But that is not the case. Teaching requires effective group communication. That requires an ability to organize, manage, and monitor communication between students. That requires a completely different technological instrument.
A teacher needs to be able to see not just all the students, but the work of every (any!) student (and, of course, communicate with any student). A teacher also needs to be able to create and re-create collaborative groups, observe the group work, and participate in that work. And this is just the bare minimum any collaborative teaching technology must do. Ideally, students should feel immersed in the same learning environment, and that means using virtual reality. The need to do laboratory experiments places even more demands on an effective distance teaching-and-learning technology.
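Just to make that bare minimum tangible, here is a toy Python data model of those two capabilities: seeing any single student’s work and regrouping students on the fly. The class and method names are hypothetical illustrations, not a design of a real product.

```python
# A toy sketch of the "bare minimum" described above: the teacher can look at the
# live work of any single student, and can dissolve and re-create collaborative
# groups at will. All names are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Student:
    name: str
    workspace: str = ""          # whatever the student is currently writing or solving


@dataclass
class Classroom:
    students: List[Student] = field(default_factory=list)
    groups: Dict[str, List[Student]] = field(default_factory=dict)

    def observe(self, student: Student) -> str:
        """Teacher looks at the live work of one specific student."""
        return f"{student.name}: {student.workspace or '(blank)'}"

    def regroup(self, group_size: int) -> None:
        """Teacher dissolves the old groups and forms new collaborative ones."""
        self.groups = {
            f"group-{i // group_size + 1}": self.students[i:i + group_size]
            for i in range(0, len(self.students), group_size)
        }


if __name__ == "__main__":
    room = Classroom([Student("Ann"), Student("Bo"), Student("Cy"), Student("Di")])
    room.students[1].workspace = "F = ma, so a = F/m"
    print(room.observe(room.students[1]))
    room.regroup(group_size=2)
    print(list(room.groups))     # ['group-1', 'group-2']
```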
To the best of my knowledge, there is no company or startup trying to develop that technology.
Hence, distance teaching sucks, and will continue to suck for years ahead.