Strong and Weak AI

Image by Gerd Altmann from Pixabay

Ask several people to define artificial intelligence (AI) and you'll get several different definitions. If some of them are tech people and the others are just regular folks, the definitions will vary even more. Some might say it means human-like robots. Others might point to the digital assistant on their countertop or inside their mobile device.

One way of differentiating AI that I don't often hear is by the two categories of weak AI and strong AI.

Weak AI (also known as “Narrow AI”) simulates intelligence. These technologies use algorithms and programmed responses and are generally built for a specific task. When you ask a device to turn on a light, tell you the time or find a channel on your TV, you're using weak AI. The device or software isn't doing any kind of "thinking," though the response might seem smart (as with many tasks on a smartphone). You are much more likely to encounter weak AI than strong AI in your daily life.
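
To make the distinction concrete, here is a toy sketch in Python (purely illustrative, not taken from any real assistant) of what much weak AI boils down to: pattern-matched commands mapped to canned actions, with no learning involved.

    # A toy "weak AI" assistant: canned responses for a narrow set of tasks.
    # Nothing here learns or generalizes; it only matches known phrases.
    from datetime import datetime

    def handle_command(command: str) -> str:
        command = command.lower()
        if "turn on" in command and "light" in command:
            return "Okay, turning on the light."   # would call a smart-home API
        if "what time" in command:
            return "It is " + datetime.now().strftime("%I:%M %p") + "."
        if "channel" in command:
            return "Switching to that channel."
        return "Sorry, I can't help with that."    # anything outside its narrow scope

    print(handle_command("Please turn on the kitchen light"))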

Strong AI is closer to mimicking the human brain. At this point, we could say that strong AI is “thinking” and "learning," but I would keep those terms in quotation marks. Definitions of strong AI usually also include some discussion of technology that learns and grows over time, which brings us to machine learning (ML), a field I would consider a subset of AI.

ML algorithms are becoming more sophisticated, and it might excite or frighten you as a user that they are getting to the point where they can learn and act on the data around them without being explicitly programmed for each case. This is called "unsupervised ML." In the sci-fi nightmare scenario, the AI no longer needs humans. Of course, that is not even close to true today: the AI still requires humans to write the programming and to supply the hardware and its power. I don't fear an AI takeover in the near future.
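
As a rough illustration of unsupervised learning (a minimal sketch, assuming numpy and scikit-learn are installed), k-means below finds groupings in unlabeled data; nobody tells the algorithm in advance what the groups are.

    # Unsupervised learning in miniature: k-means finds structure in unlabeled data.
    import numpy as np
    from sklearn.cluster import KMeans

    # Unlabeled "sensor readings" with two natural groupings.
    rng = np.random.default_rng(0)
    data = np.vstack([
        rng.normal(loc=[1.0, 1.0], scale=0.2, size=(50, 2)),
        rng.normal(loc=[5.0, 5.0], scale=0.2, size=(50, 2)),
    ])

    # No labels are supplied; the algorithm discovers the two clusters on its own.
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
    print(model.cluster_centers_)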

But strong AI and ML can go through huge amounts of the data they are connected to and find useful patterns, including patterns and connections that a human would be unlikely to find. Recently, you may have heard of the attempts to use AI to find a coronavirus vaccine. AI can do very tedious, data-heavy and time-intensive tasks in a much shorter timeframe than people can.

If you consider what your new smarter car is doing when it analyzes the road ahead, the lane lines, objects, your speed, the distance to the car ahead and hundreds or thousands of other factors, you see AI at work. Some of that is simpler weak AI, but more and more it is becoming stronger. Consider all the work being done on autonomous vehicles over the past two decades, much of which has found its way into vehicles that still have drivers.

Of course, cybersecurity and privacy become key issues when data is shared. You may feel more comfortable allowing your thermostat to learn your habits, or your car to learn how and where you drive, than you are about letting the government have that same data. Consider how much data we share online when we do our banking, visit sites, make purchases and run searches, and the level of paranoia rises. I may not know who you are as you read this article, but I suspect someone else knows, and is more interested in knowing than I am.

Event-Based Internet

Event-based Internet is going to be something you will hear more about this year. Though I had heard the term used, the first real application of it that I experienced was a game. But don't think this is all about fun and games. Look online and you will find examples of event-based Internet biosurveillance and event-based Internet robot teleoperation systems and other very sophisticated uses, especially connected to the Internet of Things (IoT).

What did more than a million people do this past Sunday night at 9pm ET? They tuned in to HQ Trivia, a live game show, on their phones.

For a few generations that have become used to time-shifting their viewing, this real-time game is a switch. 

The HQ app has had early issues scaling to those big numbers, with game delays, video lag and times when the game just had to be rebooted. But it already has at least one imitator, "The Q," which looks almost identical in design, and imitation is supposed to be a form of flattery.

This 12-question trivia quiz has money prizes. Usually, the prize is $2,000, but sometimes it jumps to $10,000 or $20,000. But since multiple players usually survive all 12 questions and split the pot, the payouts are often less than $25 each.

Still, I see the show's potential. (Is it actually a "show"?) Business model? Sponsors, commercial breaks, and product placement in the questions, answers and banter in between questions.

The bigger trend here is that this is a return to TV "appointment viewing."  Advertisers like that and it only really occurs these days with sports, some news and award shows. (HQ pulled in its first audience of more than a million Sunday during the Golden Globe Awards, so...) 

And is there some education connection in all this?  Event-based Internet, like its TV equivalent, is engaging. Could it bring back "The Disconnected" learner?  

I found a NASA report on "Lessons Learned from Real-Time, Event-Based Internet Science Communications."  This report is focused on sharing science activities in real-time in order to involve and engage students and the public about science.

Event-based distributed systems are being used in areas such as enterprise management, information dissemination, finance, environmental monitoring and geo-spatial systems.

Education has been "event-based" for hundreds of years. But learners have been time-shifting learning via distance education and especially via online learning for only a few decades. Event-based learning sounds a bit like hybrid or blended learning. But one difference is that learners are probably not going to tune in and be engaged with just a live lecture. Will it take a real event and maybe even gamification to get live learning? 

In all my years teaching online, I have never been able to get all of a course's students to attend a "live" session, whether because of time zone differences, work schedules or perhaps content that just wasn't compelling enough.

What will "Event-based Learning" look like?

Edge Computing

I learned about edge computing a few years ago. It is a method of getting the most from data in a computing system by performing the data processing at the "edge" of the network. The edge is near the source of the data, not at a distance. By doing this, you reduce the communications bandwidth needed between sensors and a central datacenter. The analytics and knowledge generation are right at or near the source of the data.
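
As a minimal sketch of the idea (the sensor, field names and thresholds here are hypothetical, not tied to any particular platform), an edge device can crunch raw readings locally and send only a small summary upstream, instead of streaming every sample to a central datacenter.

    # Edge-style processing: aggregate raw sensor samples locally,
    # then ship only a compact summary to the central service.
    import json
    import statistics

    def summarize_readings(readings):
        """Reduce many raw samples to one small payload for the cloud."""
        return json.dumps({
            "count": len(readings),
            "mean": round(statistics.mean(readings), 2),
            "min": min(readings),
            "max": max(readings),
        })

    # 1,000 raw temperature samples stay on the device;
    # only a payload of a few dozen bytes crosses the network.
    raw = [20.0 + (i % 10) * 0.1 for i in range(1000)]
    print(summarize_readings(raw))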

The cloud, laptops, smartphones, tablets and sensors may be new things but the idea of decentralizing data processing is not. Remember the days of the mainframe computer?

The mainframe is/was a centralized approach to computing. All computing resources are at one location. That approach made sense once upon a time when computing resources were very expensive - and big. The first mainframe in 1943 weighed five tons and was 51 feet long. Mainframes allowed for centralized administration and optimized data storage on disc.

Access to the mainframe came via "dumb" terminals or thin clients that had no processing power. These terminals couldn't do any data processing, so all the data went to, was stored in, and was crunched at the centralized mainframe.

Much has changed. Yes, a mainframe approach is still used by businesses like credit card companies and airlines to send and display data via fairly dumb terminals. And it is costly. And slower. And when the centralized system goes down, all the clients go down. You have probably been in some location that couldn't process your order or access your data because "our computers are down."

It turned out that you could even save money by setting up a decentralized, or “distributed,” client-server network. Processing is distributed between servers that provide a service and clients that request it. For applications to be decentralized this way, the client-server model needed PCs that could process data and perform calculations on their own.
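
Here is a bare-bones sketch of that split, using only Python's standard library (the port and the "service" are made up for illustration): the server owns a service, the client requests it, and each side does its own processing.

    # Minimal client-server exchange: the server provides a service
    # (uppercasing text); the client requests it over a local socket.
    import socket
    import threading

    ready = threading.Event()

    def server():
        with socket.create_server(("127.0.0.1", 9009)) as srv:
            ready.set()                        # server is now listening
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024)      # request from the client
                conn.sendall(request.upper())  # processing happens on the server

    threading.Thread(target=server, daemon=True).start()
    ready.wait()

    with socket.create_connection(("127.0.0.1", 9009)) as client:
        client.sendall(b"hello from the client")
        print(client.recv(1024))               # b'HELLO FROM THE CLIENT'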

Google Co-Founder Sergey Brin shows U.S. Secretary of State John Kerry the computers inside one of Google's self-driving cars - a data center on wheels. June 23, 2016. [State Department photo/Public Domain]

Add faster bandwidth and the cloud and a host of other technologies (wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing) and you can compute at the edge.  Terms like local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlets, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented reality and more that I haven't encountered yet have all come into being.

Recently, I heard a podcast on "Smart Elevators & Self-Driving Cars Need More Computing Power" that got me thinking about the millions of objects (Internet of Things) connecting to the Internet now. Vehicles, elevators, hospital equipment, factory machines, appliances and a fast-growing list of things are making companies like Microsoft and GE put more computing resources at the edge of the network. 

Until now, this has been computer architecture for people, not things. In 2017, there were about 8 billion devices connected to the net. That number is expected to reach 20 billion by 2020. Do you want the sensors in your car that are analyzing traffic and environmental data to be sending it to some centralized resource - or doing the processing in your car? Milliseconds matter in avoiding a crash. You need the processing to be done on the edge. Cars are "data centers on wheels." 
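
To put rough numbers on why those milliseconds matter (the latency figures below are illustrative assumptions, not measurements), a quick back-of-the-envelope calculation shows how far a car travels while waiting for a decision:

    # Back-of-the-envelope: distance covered while waiting on a decision.
    # Latency numbers are illustrative assumptions, not measurements.
    FT_PER_MS_AT_1_MPH = 5280 / 3600 / 1000   # feet per millisecond at 1 mph

    def feet_traveled(speed_mph, latency_ms):
        return speed_mph * FT_PER_MS_AT_1_MPH * latency_ms

    for label, latency_ms in [("on-board (edge) decision", 10),
                              ("round trip to a distant datacenter", 150)]:
        print(f"{label}: ~{feet_traveled(65, latency_ms):.1f} ft at 65 mph")
    # on-board (edge) decision: ~1.0 ft at 65 mph
    # round trip to a distant datacenter: ~14.3 ft at 65 mph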

Remember the early days of the space program? All the computing power was on Earth. You have no doubt heard the comparison that the iPhone in your pocket has hundreds or even thousands of times the computing power of those early spacecraft. Keeping the processing on the ground was dangerous, but it was the only option. Now, much of the computing power is at the edge - even if the vehicle is at the edge of our solar system. And things that are not as far off as outer space - like a remote oil pump - also need to compute at the edge rather than connect at a distance to processing power. 

Plan to spend more time in the future at the edge.

Monetizing Your Privacy

Data is money. People are using your data to make money. What if you could sell, rather than give away, your private data? Is it possible that some day your data might be more valuable than the thing that is supplying your data?

John Ellis deals with big data and how it may change business models. He was Ford Motor Company’s global technologist and head of the Ford Developer Program, so cars are the starting place for the book, but beyond transportation, insurance, telecommunications, government and home building are all addressed. His book, The Zero Dollar Car: How the Revolution in Big Data will Change Your Life, is not as much about protecting our data as users, as it is about taking ownership of it. In essence, he is suggesting that users may be able to "sell" their data to companies (including data collectors such as Google) in exchange for free or reduced cost services or things.

I'm not convinced this will lead to a free/zero-dollar car, but the idea is interesting. You are already allowing companies to use your data when you use a browser, shop at a website, or use GPS on your phone or in a car device. The growth of the Internet of Things (IoT) means that your home thermostat, refrigerator, television and other devices are also supplying your personal data to companies. And many companies (Google, Apple and Amazon are prime examples) use your data to make money. Of course, this is also why Google can offer you free tools and services like Gmail, Docs, etc.

Ellis talks about a car that pays for itself with your use and data, but the book could also be the Zero Dollar House or maybe an apartment. Big technology companies already profit from the sale of this kind of information. Shouldn't we have that option?

Duly noted: the data we supply also helps us. Your GPS or maps program uses your route and speed to calculate traffic patterns and reroute or notify you. The health data that your Apple watch or fitness band uploads can help you be healthier, and in aggregate it can help the general population too.

I remember years ago when Google began to predict flu outbreaks in geographic areas based on searches for flu-related terms. If all the cars on the road were Net-enabled and someone was monitoring the ambient temperature and their use of windshield wipers, what could be done with that data? What does an ambient temperature of 28 degrees F and heavy wiper use by cars in Buffalo, New York indicate? A snowstorm. Thousands or millions of roaming weather stations. And that data would be very useful to weather services and to companies (like airlines and shipping companies) that rely on weather data - and are willing to pay for it.
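
As a rough sketch of that kind of aggregation (the field names and thresholds are made up for illustration), pooling simple telemetry from many vehicles could flag a likely snowstorm in one area:

    # Toy aggregation of vehicle telemetry: many cars reporting freezing
    # temperatures plus heavy wiper use in one area suggest a snowstorm.
    from statistics import mean

    reports = [
        {"city": "Buffalo, NY", "temp_f": 28, "wipers_on": True},
        {"city": "Buffalo, NY", "temp_f": 27, "wipers_on": True},
        {"city": "Buffalo, NY", "temp_f": 29, "wipers_on": True},
        {"city": "Buffalo, NY", "temp_f": 28, "wipers_on": False},
    ]

    avg_temp = mean(r["temp_f"] for r in reports)
    wiper_share = sum(r["wipers_on"] for r in reports) / len(reports)

    if avg_temp <= 32 and wiper_share >= 0.5:
        print(f"Likely snow near {reports[0]['city']} "
              f"(avg {avg_temp:.0f} F, {wiper_share:.0%} of wipers on)")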

Am I saying that you should give up your privacy for money or services? No, but you should have that option - and the option to keep all your data private.