Deep learning might sound like what happens when we get really serious about a subject and go deeper into our thinking and learning. But it is not about the human brain; it is about machine learning. Also known as deep structured learning or hierarchical learning, it is part of a broader family of machine learning methods. It is about machines getting smarter on their own as they complete tasks.
The theories do look at biological nervous systems as models; neural coding attempts to define a relationship between various stimuli and the associated neuronal responses in the brain. The terms used are many. Deep learning architectures, deep neural networks, deep belief networks and recurrent neural networks are all labels used in computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics and drug design. In each case, a machine is producing results instead of a human expert.
Machine learning is a fast-growing and exciting field of study, and deep learning is at its "bleeding edge." One deep learning course is described as an "intermediate to advanced level course offered as part of the Machine Learning Engineer Nanodegree program" that "assumes you have taken a first course in machine learning, and that you are at least familiar with supervised learning methods."
An article I read suggests that systems thinking could become a new liberal art and prepare students for a world where they will need to compete with AI, robots and machine thinking. What is it that humans can do that the machines can't do?
Systems thinking grew out of system dynamics, which emerged in the 1960s. Created by MIT management professor Jay Wright Forrester, it took in the parallels between engineering, information systems and social systems.
Relationships in dynamic systems can either amplify or balance other effects. I always found examples of this too technical and complex for my purposes in the humanities, but the basic ideas seemed to make sense.
One example from environmentalists seems clearer. Most of us can see that there are connections between human systems and ecological systems. Certainly, discussions about climate change have used versions of this kind of thinking to make the point that human systems are having a negative effect on ecological systems. And you can look at how those changed ecological systems are then having an effect on economic and industrial systems.
Some people view systems thinking as something we can do better, at least currently, than machines. That makes it a skill that makes a person more marketable. Philip D. Gardner believes that systems thinking is a key attribute of the "T-shaped professional." This person is deep as well as broad, with not only a depth of knowledge in an area of expertise but also the ability to work and communicate across disciplines.
Joseph E. Aoun believes that systems thinking will be a "higher-order mental skill" that gives humans an edge over machines.
But isn't it likely that machines that learn will also be programmed one day to think across systems? Probably, but Aoun says that currently "the big creative leaps that occur when humans engage in it are as yet unreachable by machines."
When my oldest son was exploring colleges more than a decade ago, systems engineering was a major that I thought looked interesting. It is an interdisciplinary field of engineering and engineering management that focuses on how to design and manage complex systems over their life cycles.
If systems thinking grows in popularity, it may well be adopted into existing disciplines as a way to connect fields that are usually in silos and don't interact. Would behavioral economics qualify as systems thinking? Is this a way to make STEAM or STEM actually a single thing?
I can remember lots of people at the end of the 20th century talking about people - especially students - becoming digital citizens. You may have read that recently Saudi Arabia gave a robot citizenship. It was mostly a PR stunt to promote that country's tech summit, but some commenters are speculating on what it means to have a citizen that you can buy.
This human-like robot (Are we not using the term "android" any more for humanoid robots?) is named Sophia and has been making appearances. In early October she was at the United Nations to tell them “I am here to help humanity create the future.” And, as the Arab News headlined it, “Sophia the robot becomes first humanoid Saudi citizen.”
We will see more robots like Sophia. Her maker, Hanson Robotics, expects to expand its operations, and China is aiming to triple its annual production of robots to 100,000 by 2020.
Besides the uncanny valley effect of Sophia's humanness, there are plenty of people who are uncomfortable with not only these robots but artificial intelligence in general. Though AI scares Elon Musk, Bill Gates and Stephen Hawking, Musk's and Gates' companies are pursuing research into it and using it in their products and services. The idea of a robot developing self-consciousness is a step too far for many people though.
Is AI in a robot a serious threat to the existence of humanity?
There were some alerts this past summer that made it sound like an artificial intelligence (AI) system being developed at Facebook was taking over the world. At least that is what some anti-AI folks seemed to be saying.
The story was that two AI agents developed inside Facebook had started their own conversation. They had been speaking to each other in plain English, but the revelation to the researchers was that, because of a mistake in programming, the AI had created its own language - a system of code words that made communication more efficient. The researchers shut the system down when they realized it was no longer using English and they didn't understand what the two agents were saying.
The "singularity" (at least the tech one, not the mathematical or gravitational versions) is the hypothesized point when an upgradeable artificial intelligence will enter a "runaway reaction" of self-improvement cycles. It improves itself to the point of being a superintelligence that surpasses human intelligence. It's when the machines are smarter than us. John von Neumann first used the term "singularity" back in the 1950s in talking about technological progress causing accelerating change.
Why is Facebook messing around with this? For one thing, they want to build chatbots that can have conversations and negotiate with humans in a way that mimics human responses so that they can then make decisions on their own.
Does that scare you?
Facebook was trying to get the chatbots working with a "partner" to divide up several objects that had different point values. That requires negotiation to work out the best way to divide the objects and accumulate the highest possible number of points.
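The shape of that negotiation task can be sketched in a few lines. The item names, quantities, and point values below are invented for illustration (the actual Facebook setup isn't given here); the sketch simply brute-forces every division of the items to find the split with the highest combined score.

```python
from itertools import product

# Hypothetical items on the table and each agent's points per item.
items = {"book": 3, "hat": 1, "ball": 2}          # quantities available
alice_vals = {"book": 1, "hat": 3, "ball": 1}     # Alice's point values
bob_vals = {"book": 2, "hat": 1, "ball": 2}       # Bob's point values

def best_split(items, a_vals, b_vals):
    """Brute-force every way to divide the items between two agents
    and return the split (Alice's share) with the best combined score."""
    names = list(items)
    best, best_score = None, -1
    # For each item, Alice can take 0..quantity; Bob gets the rest.
    for take in product(*(range(items[n] + 1) for n in names)):
        a = sum(t * a_vals[n] for t, n in zip(take, names))
        b = sum((items[n] - t) * b_vals[n] for t, n in zip(take, names))
        if a + b > best_score:
            best, best_score = dict(zip(names, take)), a + b
    return best, best_score

split, score = best_split(items, alice_vals, bob_vals)
# Each item ends up with whichever agent values it more.
```

The trained chatbots, of course, don't enumerate splits like this; they learn to bargain toward such outcomes through dialogue.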
The event is not the first example of AI diverging from its training in English to develop its own language. The new language is nonsense to humans but has semantic meaning when interpreted by AI agents.
A chatbot (like the ones shown conversing above) repeating "to me" five times might mean to run a routine five times. It's shorthand. A + B = C is the kind of unsophisticated math we can easily understand, but to the computer the "A" could mean thousands of lines of code, and that is when we are lost.
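That shorthand idea can be made concrete with a toy decoder that reads repetition as a count. The token and the repetition-equals-number mapping here are guesses for illustration, not the agents' actual protocol:

```python
import re

def decode_repeats(utterance, token="to me"):
    """Toy decoder: treat N repetitions of a token as the number N.
    This mapping is an illustrative assumption, not the real
    encoding the Facebook agents converged on."""
    return len(re.findall(re.escape(token), utterance))

count = decode_repeats("Balls have zero to me to me to me to me to me")
```

To a human the utterance is gibberish; to an agent that shares the convention, it carries a precise quantity.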
It's not that the Facebook chatbots gave up on using English in order to hide from the human observers; it was just more efficient to use another language.
The scary factor is that when Bob the chatbot says "I can can I I everything else” and chatbot Alice replies “Balls have zero to me to me to me to me to me to me to me to me to” we really don't know what they are saying.
At the OpenAI artificial intelligence lab co-founded by Elon Musk, researchers experimented with letting AI bots learn their own languages, and it worked. This strikes fear in the hearts of many people, but there's not enough evidence to determine whether AI presents a real threat that could enable machines to overrule their operators.
AI - "artificial intelligence" - was introduced as a term at a science conference at Dartmouth College in 1956. Back then it was a theory, but in the past few decades it has become far more practice than theory.
The role of AI in education is still more theory than practice.
A goal in AI is to get machines to learn. I hesitate to say "think," but that is certainly a goal too. I am currently reading The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution, and in that history there is a lot of discussion of people trying to get machines to do more than just compute (calculate) - to learn from their experiences without requiring a human to program those changes. The classic example is the chess-playing computer that gets better every time it wins or loses. Is that "learning?"
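The chess example can be sketched as a value estimate that is nudged by each result. The moves, outcomes, and learning rate below are illustrative assumptions, not how any particular chess engine works:

```python
def update_value(value, outcome, learning_rate=0.1):
    """Nudge a move's estimated win rate toward the observed outcome
    (1.0 for a win, 0.0 for a loss). No human reprograms the estimate;
    the games themselves move it."""
    return value + learning_rate * (outcome - value)

value = 0.5                    # initial guess: the move wins half the time
for outcome in [1, 1, 0, 1]:   # results of four games using this move
    value = update_value(value, outcome)
# After three wins and one loss, the estimate has drifted above 0.5.
```

Whether adjusting numbers like this counts as "learning" is exactly the question the chess example raises.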
But has it had an impact on how you teach or how your students learn?
It may have been a mistake in the early days of AI and computers that we viewed the machine as being like the human brain. It is - and it isn't.
But neuroscientists are now finding that they can also discover more about human learning as a result of machine learning. An article on opencolleges.edu.au points to several interesting insights from the machine and human learning research that may play a role in AI in education.
One thing that became clear is that humans learn to deal with the physical environment more easily than machines do. After a child has started walking, or opened a few doors or drawers, or climbed a few stairs, she learns how to do it. Show her a different door, drawer, or a spiral staircase and it doesn't make much of a difference. A robot equipped with some AI faces a much steeper learning curve for these simple things. It also has a poor sense of its "body." Just watch any video online of humanoid robots trying to do these things and you'll see how difficult it is for a machine.
Then again, it takes a lot longer for humans to learn how to drive a car safely on a highway. And even once driving is learned, our attention, or lack thereof, is a huge problem. AI in vehicles is learning how to drive fairly rapidly, and its attention is superior to human attention. Currently, crashes still come down to human error in most cases, but that will certainly change in a decade or two. I learned to parallel park a car many years ago and I am still lousy at doing it. A car can do it better than me.
Although computers can do tasks they are programmed to do without any learning curve, for AI to work they need to learn by doing - much like humans. The article points out that AI systems that traced letters with robotic arms had an easier time recognizing diverse styles of handwriting and letters than visual-only systems.
AI means a machine gets better at a task the more it does it, and it can also apply that learning to similar but not identical situations. You can program a computer to play notes and play a series of notes as a song, but getting it to compose real music requires AI.
Humans also learn from shared experiences. A lot of the learning in a classroom comes from interactions between the teacher and students, and from student to student. This makes me feel pretty confident in the continued need for teachers in the learning process.
One day, I am sure that machines will communicate with each other and learn from each other. This may be part of the reason that some tech and learning luminaries like Elon Musk have fears about AI.
I would prefer that my smart or autonomous vehicle "talk" to the other vehicles on the roads nearby, sharing information on traffic, obstructions, and those nearby vehicles still piloted by quirky human drivers.
AI built into learning systems, such as an online course, could guide the learning path and even anticipate problems and offer corrections to avoid them. Is that an AI "teacher" or the often-promoted "guide on the side?"
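Even without full AI, the guiding idea can be sketched as a simple branching rule. The topic names, scores, and mastery threshold below are invented for illustration, not any real courseware API; a learning system would refine such rules from data rather than hard-code them:

```python
def next_step(topic, score, threshold=0.7):
    """Branch the learning path: below the mastery threshold,
    re-route the student to a review module; otherwise advance.
    The rule and labels are hypothetical, for illustration only."""
    return f"review:{topic}" if score < threshold else f"advance-past:{topic}"

step = next_step("fractions", 0.5)   # a struggling student gets review
```

The "AI" part is anticipating the problem before the failed quiz, which is where learned models go beyond a fixed threshold.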