Are You Ready For Y2K38?

 

Do you remember the Y2K scare, also known as the Millennium Bug? On this eve of a new year, I am recalling that scare, which stemmed from a widespread concern in the late 1990s that many computer systems would fail when the year changed from 1999 to 2000.

Why? Many older computer systems and software programs represented years using only the last two digits (e.g., "1999" was stored as "99"). It was feared that when 2000 arrived, these systems might interpret "00" as 1900 instead of 2000, leading to several problems.
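
To see how little it takes to trigger that kind of bug, here is a minimal Python sketch (hypothetical code, not drawn from any real system) of a two-digit year field, along with the "windowing" fix many teams applied; the pivot value of 69 is just an illustrative choice.

```python
# Hypothetical sketch of a pre-Y2K date field that stores only two digits.
def naive_expand_year(two_digit_year: int) -> int:
    # The old shortcut: assume every date lives in the 1900s.
    return 1900 + two_digit_year

print(naive_expand_year(99))  # 1999 -- worked fine for decades
print(naive_expand_year(0))   # 1900 -- the Y2K bug: the year 2000 reads as 1900


# One common remediation: a "windowing" (pivot) rule that splits the
# two-digit range between the 1900s and the 2000s.
def windowed_expand_year(two_digit_year: int, pivot: int = 69) -> int:
    # Values at or above the pivot are treated as 19xx, the rest as 20xx.
    century = 1900 if two_digit_year >= pivot else 2000
    return century + two_digit_year

print(windowed_expand_year(99))  # 1999
print(windowed_expand_year(0))   # 2000
```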

Systems that relied on accurate date calculations could produce errors or fail entirely. For example, financial systems calculating interest rates or loan payments might miscalculate. Concerns arose about critical systems in utilities, transportation, healthcare, and government shutting down. Files or databases might become corrupted due to incorrect data processing.

Probably the greatest concern was in banking and finance where it was feared that miscalculated transactions, stock market crashes, or ATM malfunctions might occur.

Some people predicted power grid failures, water system disruptions, and the collapse of aviation navigation systems and air traffic control.

What if there were malfunctioning military systems, including nuclear launch systems?

And so, billions of dollars were spent worldwide to identify, update, and test potentially vulnerable systems. IT professionals worked tirelessly to ensure compliance before the deadline.

What Happened? The transition to the year 2000 was largely uneventful. A few minor issues were reported, but there were no catastrophic failures. That is not to say there was no reason for concern; the smooth outcome is generally credited to the massive preventive effort rather than to the fears being overblown.

The Y2K scare highlighted the importance of forward thinking in software development and helped establish more rigorous practices for handling dates and times in computing.

If you want to start preparing (or worrying) now for the next similar scare, the Y2K38 Problem (Year 2038 Issue) arises from how older computer systems store time as a signed 32-bit integer counting seconds since January 1, 1970 (Unix time). On January 19, 2038, that count will exceed the largest value a signed 32-bit integer can hold, causing a rollover that could result in misinterpreted dates or system crashes. This potentially affects embedded systems, infrastructure, and older software. Modern systems are increasingly moving to 64-bit time representations, which kicks the problem far into the future.
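
A few lines of Python make that 2038 boundary concrete; this is just an illustration of the arithmetic, not code from any affected system.

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
MAX_INT32 = 2**31 - 1  # 2,147,483,647, the largest signed 32-bit value

# The last moment a signed 32-bit seconds counter can represent:
print(EPOCH + timedelta(seconds=MAX_INT32))  # 2038-01-19 03:14:07+00:00

# One second later the counter overflows to -2**31, which a 32-bit
# system would read as a date in December 1901:
print(EPOCH + timedelta(seconds=-2**31))     # 1901-12-13 20:45:52+00:00
```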

Natural Language Processing (NLP)

Natural Language Processing (NLP) is a field of artificial intelligence focused on enabling computers to understand, interpret, and generate human language. The "natural" part means the goal is for this language use to be meaningful and contextually relevant. NLP is used for tasks such as language translation, sentiment analysis, and speech recognition.
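
As a small, concrete taste of one of those tasks, the sketch below runs sentiment analysis using NLTK's VADER analyzer; NLTK is just one of many libraries that could be used here, and the example sentences are made up.

```python
# Sentiment analysis with NLTK's VADER lexicon (pip install nltk).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of the lexicon
sia = SentimentIntensityAnalyzer()

for sentence in [
    "The new phone's camera is fantastic.",
    "Customer support was terrible and never answered my calls.",
]:
    scores = sia.polarity_scores(sentence)
    # 'compound' ranges from -1 (most negative) to +1 (most positive)
    print(f"{scores['compound']:+.2f}  {sentence}")
```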

Image: NLP sample by Seobility (License: CC BY-SA 4.0)

Search engines leverage NLP to improve many aspects of search. Understanding what a user means by a search query, and understanding what the different pages on the web are about and which questions they answer, are vital parts of a successful search engine.

According to AWS, companies commonly use NLP for these automated tasks:
•    Process, analyze, and archive large documents
•    Analyze customer feedback or call center recordings
•    Run chatbots for automated customer service
•    Answer who-what-when-where questions
•    Classify and extract text

NLP crosses over into other fields. Here are three.

Computational linguistics is the science of understanding and constructing human language models with computers and software tools. Researchers use computational linguistics methods, such as syntactic and semantic analysis, to create frameworks that help machines understand conversational human language. Tools like language translators, text-to-speech synthesizers, and speech recognition software are based on computational linguistics. 

Machine learning is a technology that trains a computer on sample data to improve its performance at a task. Human language has features such as sarcasm, metaphors, variations in sentence structure, and grammar and usage exceptions that take humans years to learn. Programmers use machine learning methods to teach NLP applications to recognize and accurately understand these features from the start.
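
Here is a minimal sketch of that "train on sample data" idea using scikit-learn; the handful of labeled sentences are invented, and a real system would train on far more data.

```python
# A minimal "learn from sample data" sketch (pip install scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love this product, it works great",
    "Absolutely wonderful experience",
    "This is terrible and broke after a day",
    "Worst purchase I have ever made",
]
train_labels = ["positive", "positive", "negative", "negative"]

# Turn words into counts, then fit a Naive Bayes classifier on the samples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["what a wonderful product"]))    # -> ['positive'] with these samples
print(model.predict(["it broke, terrible quality"]))  # -> ['negative'] with these samples
```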

Deep learning is a specific field of machine learning that teaches computers to learn and think like humans. It involves a neural network consisting of data-processing nodes structured to resemble the human brain. With deep learning, computers recognize, classify, and correlate complex patterns in the input data.

Overview of NLP

Neural Networks and Artificial General Intelligence


A neural network is a type of deep learning model within the broader field of machine learning (ML) that simulates the human brain.

It was long thought that the way to add "intelligence" to computers was to try to imitate or model the way the brain works. That turned out to be a very difficult - some might say impossible - goal.

Neural networks process data through interconnected nodes or neurons arranged in layers—input, hidden, and output. Each node performs simple computations, contributing to the model’s ability to recognize patterns and make predictions.
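
Here is a minimal NumPy sketch of that layered structure: an input layer feeding a hidden layer feeding an output layer, with each node doing a simple weighted sum followed by a nonlinearity. The weights are random and the network is untrained, so it only illustrates the data flow, not a working model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 3 input nodes -> 4 hidden nodes -> 2 output nodes.
W1 = rng.normal(size=(3, 4))  # weights from input layer to hidden layer
W2 = rng.normal(size=(4, 2))  # weights from hidden layer to output layer

def forward(x):
    hidden = np.tanh(x @ W1)       # each hidden node: weighted sum + nonlinearity
    output = np.tanh(hidden @ W2)  # each output node does the same
    return output

x = np.array([0.5, -1.0, 2.0])     # one example input
print(forward(x))                  # two output values from an untrained network
```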

These deep learning neural networks are effective in handling tasks such as image and speech recognition, which makes them a key component of many AI applications.

When a neural network starts "training," its predictions are essentially random guesses: the connection weights between nodes begin at random values, so the signal flowing from the input layer through the hidden layers to the output layer produces arbitrary results at first. Training repeatedly adjusts those weights until the outputs become useful. If you have used any of the large language models (LLMs), you have seen the result of this process at work. GPT-4 is reported to have around 100 layers, with tens or hundreds of thousands of nodes in each layer.
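
To keep the "starts out guessing, then adjusts" idea concrete, here is a toy example with a single weight: it begins at a random value, so the first predictions are effectively guesses, and each training step nudges it toward values that reduce the error. Real networks repeat this across millions or billions of weights, but the loop is the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: learn y = 3 * x with a single weight, starting from a random guess.
w = rng.normal()                 # random initial weight -> random first predictions
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 3.0 * xs                    # the "right answers" used for training

learning_rate = 0.01
for step in range(200):
    predictions = w * xs
    error = predictions - ys
    gradient = 2 * np.mean(error * xs)  # how the squared error changes with w
    w -= learning_rate * gradient       # nudge the weight to reduce the error

print(round(w, 3))  # close to 3.0 after training
```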

Have you ever clicked thumbs-up or thumbs-down on a computer’s suggestion? Then you have contributed to the reinforcement learning of that network.
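
In the simplest terms, that click is a reward signal. The hypothetical sketch below tallies thumbs-up and thumbs-down votes for two canned replies and gradually favors the better-rated one; real systems are far more elaborate, but the feedback-as-reward idea is the same.

```python
import random

# Two canned replies the system could offer, with running feedback tallies.
replies = {"reply_a": [], "reply_b": []}

def choose_reply():
    # Mostly pick the reply with the best average rating, sometimes explore.
    if random.random() < 0.1 or not any(replies.values()):
        return random.choice(list(replies))
    return max(replies, key=lambda r: sum(replies[r]) / max(len(replies[r]), 1))

def record_feedback(reply, thumbs_up: bool):
    # A thumbs-up is a +1 reward, a thumbs-down is -1.
    replies[reply].append(1 if thumbs_up else -1)

# Simulated users who like reply_b 80% of the time and reply_a 30% of the time.
for _ in range(500):
    reply = choose_reply()
    liked = random.random() < (0.8 if reply == "reply_b" else 0.3)
    record_feedback(reply, liked)

print({r: len(v) for r, v in replies.items()})  # reply_b typically ends up chosen far more often
```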

I have found that predicting the future of technology is rarely accurate, and predictions on AI have generally been wrong. In 1970, one of the top AI researchers predicted that “in three to eight years, we will have a machine with the general intelligence of an average human being.” Well, that did not happen.

Most current AI systems are "narrow AI" which means they are specialized to perform specific tasks very well (such as recognizing faces) but lack the ability to generalize across different tasks. Human intelligence involves a complex interplay of reasoning, learning, perception, intuition, and social skills, which are challenging to replicate in machines.

That idea of reaching artificial general intelligence (AGI) has its own set of predictions, with experts holding varying opinions on when AGI might be achieved. I have seen estimates ranging from optimistic ones of a few decades to more conservative views spanning centuries or even longer. It is hard to predict, but breakthroughs in AI research, particularly in areas like reinforcement learning, neural architecture search, and computational neuroscience, could accelerate progress toward AGI.

Is Your 2024 Phone Finally Smarter Than You?

Image: playing chess against a smartphone

The prediction game is a tough one to win. I wrote a piece in 2013 titled "In 4 Years Your Phone Will Be Smarter Than You (and the rise of cognizant computing)." That would mean I should have checked back in 2017 to see if the predictions came to pass. Well, not my predictions, but those from an analysis by the market research firm Gartner. I did check back at the end of 2022, and now I'm checking in again after just a few years.

That original report predicted that the change would come not so much from hardware as from the growth of data and computational ability in the cloud. That seems to have been true about hardware. My smartphone in 2024 is not radically different from the one I had in 2017. It is more expensive, with a better camera and new apps, but it still performs the same basic functions as back then. It looks about the same too. No radical changes.

If phones seem smarter, it depends on your particular definition of "smart." If smart means being able to recall information and make inferences, then my phone, my Alexa, and the Internet are all smarter than me. And in school, remembering information and making inferences are still a big part of being smart. But that's not all of it.

"Cognizant computing" was part of that earlier piece. That is software and devices that predict your next action based on personal data already gathered about you. It might at a low level suggest a reply to an email. At a high level, it might suggest a course of treatment to your doctor. The term "cognizant computing" doesn't seem to occur much anymore. In fact, looking for it today on Wikipedia brought the result "The page "Cognizant computing" does not exist."

It seems to have been grouped in with machine learning, natural language processing, computer vision, human-computer interaction, and any intelligent system that can perceive and understand its environment, interact with users in natural ways, and adapt its behavior to changing circumstances. I think the average person would respond to all that with, "Oh, you mean AI?"

It's there in virtual assistants (like Siri, Alexa, or Google Assistant), personalized recommendation systems (such as those used by Netflix or Amazon), smart home devices, and various other domains where systems need to understand and respond to user needs effectively.

I asked a chatbot if it was an example of cognizant computing and it replied, "Yes, a chatbot can be considered an example of cognizant computing, particularly if it is designed to exhibit certain key characteristics."

The characteristics it meant are context awareness, personalization, adaptability, and natural interaction.

Chatbots can be aware of the context of a conversation: they may remember previous interactions with the user, understand the current topic, and adapt their responses accordingly. In these ways, a chatbot can personalize interactions, which shows its adaptability and its ability to learn from user interactions and improve over time. Using natural language processing (NLP) techniques to understand and generate human-like responses makes for more natural conversations between humans and machines.
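
As a toy illustration of that context awareness, the sketch below keeps a running conversation history and uses an earlier turn (here, just a remembered name) to shape a later reply. It is a deliberately simple rule-based bot, not a description of how any production assistant works.

```python
# A deliberately simple rule-based chatbot that remembers earlier turns.
history = []  # every (speaker, text) pair so far -- the conversation context
memory = {}   # facts extracted from the conversation, e.g. the user's name

def reply(user_text: str) -> str:
    history.append(("user", user_text))
    text = user_text.lower()
    if text.startswith("my name is "):
        memory["name"] = user_text[11:].strip()
        answer = f"Nice to meet you, {memory['name']}!"
    elif "my name" in text:
        answer = (f"You told me your name is {memory['name']}."
                  if "name" in memory else "You haven't told me your name yet.")
    else:
        answer = "Tell me more."
    history.append(("bot", answer))
    return answer

print(reply("My name is Ada"))    # Nice to meet you, Ada!
print(reply("What is my name?"))  # You told me your name is Ada.
print(len(history))               # 4 turns kept as context
```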

Is my smartphone smarter than me in 2024? It is smarter, but I think I still have some advantages. I'll check in again in a few years.