Computers (and AI) Are Not Managers


Internal IBM document, 1979 (via Fabricio Teixeira)

I saw the quote pictured above, which goes back to 1979, when artificial intelligence wasn't part of the conversation. "A computer must never make a management decision," said an internal document at the big computer player of that time, IBM. The reason given for that statement: a computer can't be held accountable.

Is the same thing true concerning artificial intelligence 46 years later?

I suspect that AI is currently being used by management to analyze data, identify trends, and even offer recommendations. But I sense there is still the feeling that it should complement, not replace, human leadership.

Why should AI be trusted only in a limited way with certain aspects of decision-making?

One reason that goes back at least 46 years is that it lacks "emotional intelligence." Emotional intelligence (EI or EQ) is about balancing emotions and reasoning to make thoughtful decisions, foster meaningful relationships, and navigate social complexities. Management decisions often require a deep understanding of human emotions, workplace dynamics, and ethical considerations — all things AI can't fully grasp or replicate.

Another reason is that AI relies on data and patterns, while human management often involves unique situations with no clear precedents or data points. Those decisions require creativity and empathy.

Considering that 1979 statement, since management decisions can have far-reaching consequences, humans are ultimately accountable for them. Relying on AI alone raises questions about responsibility when things go wrong. Who is responsible: the person who used the AI, the person who trained it, or the AI itself? Obviously, we can't reprimand or fire an AI, though we could switch to a different one, and the AI itself can be revised to correct whatever went wrong.

AI systems can unintentionally inherit biases from the data they're trained on. Without proper oversight, this could lead to unfair or unethical decisions. Of course, bias is a part of human decisions and management too.

Management at some levels involves setting long-term visions and values for an organization. This goes beyond the realm of pure logic and data, requiring imagination, purpose, and human judgment.

So, can AI handle any management decisions in 2025? I asked several AI chatbots that question (realizing that AI might have a bias in favor of AI). Here is a summary of the possibilities they offered:

Resource Allocation: AI can optimize workflows, assign resources, and balance workloads based on performance metrics and project timelines (a minimal sketch of this idea follows the list).

Hiring and Recruitment: AI tools can screen résumés, rank candidates, and even conduct initial video interviews by analyzing speech patterns and keywords.

Performance Analysis: By processing large datasets, AI can identify performance trends, suggest areas for improvement, and even predict future outcomes.

Financial Decisions: AI systems can create accurate budget forecasts, detect anomalies in spending, and provide investment recommendations based on market trends.

Inventory and Supply Chain: AI can track inventory levels, predict demand, and suggest restocking schedules to reduce waste and costs.

Customer Management: AI chatbots and recommendation engines can handle customer queries, analyze satisfaction levels, and identify patterns in customer feedback.

Risk Assessment: AI can evaluate risks associated with projects, contracts, or business decisions by analyzing historical data and current market conditions.
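
To make the first item concrete, here is a minimal sketch of automated workload balancing. The task names, hours, and the greedy strategy are my own illustration, not any particular product's method.

```python
# Minimal illustration of resource allocation: greedily hand each task
# to the currently least-loaded worker. Names and hours are invented.
import heapq

def assign_tasks(workers, tasks):
    """Assign (task, hours) pairs so that loads stay roughly balanced."""
    loads = [(0, w) for w in workers]        # (current load, worker)
    heapq.heapify(loads)
    assignments = {w: [] for w in workers}
    for task, hours in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        load, worker = heapq.heappop(loads)  # lightest-loaded worker
        assignments[worker].append(task)
        heapq.heappush(loads, (load + hours, worker))
    return assignments

tasks = [("audit report", 8), ("client demo", 3), ("bug triage", 5), ("onboarding", 2)]
print(assign_tasks(["Ana", "Ben"], tasks))
```

A real tool would also weigh skills, deadlines, and the performance metrics mentioned above; the point is only that this kind of balancing is mechanical enough for software to do.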

As I write this in March 2025, the news is full of stories about DOGE and Elon Musk's team using AI for things like reviewing email responses from employees, and wanting to use more AI to replace workers and "improve efficiency." AI for management will be in the news more and more and will remain a controversial topic for years to come. I won't be around in another 46 years to write the next article about this, but I have a feeling that the question of whether AI belongs in management may be moot by then.

AI Agents


AI agents are a major focus for OpenAI, Google, and the other players. "AI agents" are software programs designed to perform specific tasks or solve problems using artificial intelligence techniques. These agents can work autonomously or with minimal human intervention, and they're capable of learning from data, making decisions, and adapting to new situations.

Gartner suggests that agentic AI is the most important strategic technology for 2025 and beyond. The tech analyst predicts that, by 2028, at least 15% of day-to-day work decisions will be made autonomously by agentic AI, up from 0% in 2024. Does that excite or frighten you?

They can automate processes, analyze data, and interact with users or other systems to achieve specific goals. You probably already interact with them in virtual assistants (Siri or Alexa), customer service chatbots, and recommendation systems (Netflix or Amazon). They may be less obvious to you in an autonomous vehicle or a financial trading system.

There are different types of AI agents, each with unique capabilities and purposes, so there are several ways we might categorize them:

Reactive agents respond to specific stimuli and do not have a memory of past events. They work well in environments with clear, predictable rules.

Model-based agents have a memory and can learn from past experiences. They use this knowledge to predict future events and make decisions.

Goal-based agents are designed to achieve specific goals. They use planning and reasoning techniques to determine the best actions to take to reach their objectives.

Utility-based agents consider multiple factors and choose actions that maximize their overall utility or benefit. They can balance competing goals and make trade-offs.

Learning agents can improve their performance over time by learning from their experiences. They use techniques like machine learning to adapt to new situations and improve their decision-making abilities. (A small sketch contrasting the first two types appears below.)
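
To make the distinction between the first two types concrete, here is a minimal sketch. The thermostat scenario, class names, and thresholds are invented for illustration; real agents are far more sophisticated.

```python
# Toy contrast: a reactive agent has no memory, while a model-based
# agent remembers past readings and uses them to anticipate the future.
# The scenario and numbers are invented for this example.

class ReactiveThermostat:
    def act(self, temp):
        # Decision depends only on the current stimulus.
        return "heat on" if temp < 20 else "heat off"

class ModelBasedThermostat:
    def __init__(self):
        self.history = []              # memory of past temperatures

    def act(self, temp):
        self.history.append(temp)
        recent = self.history[-3:]
        # Project the recent trend one step ahead and act early.
        predicted = temp + (recent[-1] - recent[0])
        return "heat on" if predicted < 20 else "heat off"

reactive, model_based = ReactiveThermostat(), ModelBasedThermostat()
for reading in [23, 22, 21]:           # temperature drifting downward
    print(reading, reactive.act(reading), "|", model_based.act(reading))
```

At the final reading, the reactive agent is still waiting for the threshold to be crossed, while the model-based agent uses its memory to anticipate the drop and turns the heat on early.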

You could also categorize agents in other ways, for example, in an educational context. For personalized learning, agents can adapt educational content to meet individual students' needs, learning styles, and pace. By analyzing data on students' performance and preferences, AI can recommend personalized learning paths and resources. In a related way, intelligent tutoring systems can provide one-on-one tutoring by offering explanations, feedback, and hints the way a human tutor might. They might even be able to create more inclusive learning environments by providing tools like speech-to-text, text-to-speech, and translation services, ensuring that all students have access to educational content. And by analyzing students' performance data, they could identify at-risk students and provide early interventions to help them succeed.
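
As a toy version of that personalization idea, here is a sketch that picks a student's next topic. The subject names, scores, and the 0.8 mastery cutoff are all invented for the example.

```python
# Toy personalized-learning picker: recommend the weakest topic that is
# still below a mastery threshold. All names and numbers are invented.

MASTERY = 0.8   # assumed mastery cutoff on a 0-1 scale

def next_topic(scores):
    """Return the lowest-scoring topic below mastery, or None if done."""
    unmastered = {topic: s for topic, s in scores.items() if s < MASTERY}
    return min(unmastered, key=unmastered.get) if unmastered else None

student = {"fractions": 0.9, "decimals": 0.55, "percentages": 0.7}
print(next_topic(student))   # -> decimals
```

A real system would draw on much richer signals than a single score per topic, but the recommend-the-weakest-spot loop is the core idea.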

AI agents can automate administrative tasks for faculty, such as grading, attendance tracking, and scheduling, freeing up educators' time to focus more on teaching and interacting with students.

Agents can "assist" in creating educational materials. I would hope faculty would be closely monitoring AI creation of tests, quizzes, lesson plans, and interactive simulations.

Though I see predictions of fully AI-powered virtual classrooms that can facilitate remote learning, I believe this is the most distant application - and probably the one that makes faculty most apprehensive.

Fear of Becoming Obsolete


The term FOBO, the fear of becoming obsolete, appeared in something I was reading recently. It is very much a workplace fear, generally connected to aging workers and to anyone who fears being replaced by technology.

Of course, AI is a large part of this fear, but it's not a new one. Workers have always worried about being considered obsolete as they aged, especially if they lacked the skills younger employees brought to the workplace. For at least two decades we have heard predictions that robots would replace workers. That did happen, though not to the levels sometimes predicted. Artificial intelligence is less obvious as it makes inroads into our work and outside lives.

Employers and workers need to be better at recognizing the ways AI is already here and being used. Approximately four in ten Americans use Face ID to log into at least one app on their phone each day. That is about 136 million people. How many think about that as AI?

If you have an electric vehicle, AI-powered systems manage the energy output. In your gas-powered car, you very likely use an AI-powered GPS for navigation.

One survey I saw found that just 44 percent of the global workforce believe they interact with AI in their personal lives today. But when asked if they used GPS maps and navigation, 66 percent said yes. What about predictive product and entertainment suggestions, such as those in Netflix and Spotify? 50 percent said yes. Do you use text editors or autocorrect? A yes from 47 percent. 46 percent use virtual home assistants, such as Alexa and Google Assistant. Even chatbots like ChatGPT and Copilot - which are less hidden and more proactive for a user - had a 31 percent yes response.

Most of these are viewed as positive uses of AI, but not all uses are. One example of the not-so-positive category is AI's use in filling up newsfeeds. Each social media network - Facebook, Twitter, Instagram, et al. - has its own AI-powered algorithm, constantly customizing billions of users' feeds. You click a like button, or just pause on a post for more than a few seconds, and that information changes your feed accordingly. Plus, the algorithm is made to push certain things to users that were suggested not by your activity but by sponsors or owners. This aspect has been widely criticized since Elon Musk took over Twitter-X, but all the platforms do it to some degree.

Some common applications are both positive and negative. Take the use of artificial intelligence in airports all over the world. It is being used to screen passengers passing through security checkpoints. At least 25 airports across the U.S., including Reagan National in Washington, D.C. and Los Angeles International Airport, have started using AI-driven facial recognition as part of a pilot project. Eventually, the Transportation Security Administration (TSA) plans to expand the ID verification technology to more than 400 airports. This can speed up your passage through security, which is something everyone would love to see. But what else is being done with that data, and will the algorithm flag people for the wrong reasons?

Do you want to push back on FOBO, particularly in the workplace? Some suggestions:
Continuous Learning: Stay curious and keep updating your skills. Whether it’s taking a course, attending workshops, or learning new technologies, continuous education is key.
Networking: Engage with your professional community. Networking can provide insights into industry trends and offer support and advice.
Adaptability: Embrace change and be open to new ideas. Flexibility can help you stay relevant.
Mindset Shift: Focus on your unique strengths and contributions. Everyone has something valuable to offer, and feeling obsolete often stems from undervaluing your skills.
Digital Detox: Sometimes, limiting your exposure to social media and other sources of comparison can reduce feelings of inadequacy.
Seek Feedback: Regularly seek feedback from peers, mentors, and colleagues to understand your areas of improvement and strengths.

An AI Chatbot Glossary

Even some less-tech people have been experimenting with chatbots now that they are embedded in Google Gemini, Apple, and Microsoft Copilot sites. A few of my less-tech friends have asked me what a term means concerning AI chatbots. Of course, they could easily ask a chatbot to define any chatbot term, but it is useful to have a glossary.

I have had friends tell me that they have had some interesting "conversations" with machines. "They almost seem human," said one friend who has no idea what a Turing test is. That sounds like fun, but generative AI could be worth $4.4 trillion to the global economy annually, according to the McKinsey Global Institute.

Besides the obviously popular AI tools, there are others, like Anthropic's Claude, the Perplexity AI search tool, and gadgets from Humane and Rabbit.

A glossary would range from very basic terms, such as "prompt," the question or instruction you enter into an AI chatbot to get a response. That might lead you to "prompt chaining": the ability of AI to use information from previous interactions to shape its future responses.
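
Here is a sketch of what prompt chaining looks like in practice, where the model's first answer is fed into the next prompt. The ask_model function is a hypothetical placeholder, not any real chatbot's API.

```python
# Sketch of prompt chaining: each prompt builds on the previous response.
# ask_model() is a hypothetical stand-in for a real chatbot API call.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your chatbot's API")

def chained_summary(article_text: str) -> str:
    # Step 1: the first prompt extracts the key points.
    points = ask_model(f"List the three key points in this article:\n{article_text}")
    # Step 2: the second prompt reuses the first answer as its context.
    return ask_model(f"Write a one-paragraph summary of these points:\n{points}")
```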

What does it mean if a tool is "agentive"? It is a system or model that exhibits agency: the ability to autonomously pursue actions to achieve a goal. This is where we enter an area that scares some people. An agentive model can act without constant supervision. Consider autonomous car features, such as brakes that apply without the driver touching the pedal, or steering that pulls a car back between the lane lines.

Speaking of AI fears, we have "emergent behavior": when an AI model exhibits unintended abilities.

Most AI tools warn against assuming that the answers they give are 100% correct. A "hallucination" is an incorrect response from an AI. Even the AI's creators don't entirely know the reasons these occur.

"Weak AI, AKA "narrow AI" is focused on a particular task and can't learn beyond its skill set. As marvelous as image creating AI can be, it has just one task.

A test in which a model must complete a task without being given the requisite training data is called "zero-shot learning": for example, an AI trained to identify cars being asked to recognize vans, pickup trucks, or tractor-trailers.
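
One way to try zero-shot classification yourself is with the Hugging Face transformers library, assuming you have it installed and can download the bart-large-mnli model; the sentence and candidate labels below are my own example.

```python
# Zero-shot classification sketch with Hugging Face transformers
# (assumes `pip install transformers` plus a model download on first run).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "An eighteen-wheeler hauling freight merged onto the highway.",
    candidate_labels=["car", "van", "pickup truck", "tractor trailer"],
)
print(result["labels"][0])   # the model's best guess among the labels
```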

More terms and some reviews of chatbots at cnet.com/tech/services-and-software/