
March 6, 2026


It’s Time To Put Your AI Through Leadership Training


Here’s what’s happening right now: Someone on your supply chain team just asked ChatGPT whether to reroute a shipment. Someone in marketing used Claude to reevaluate your brand messaging. Your overworked accounts receivable manager let an AI model flag which clients to pressure for overdue payment.

Actions like these probably wouldn’t surprise you. According to McKinsey’s “The State of AI in 2025” survey, 88% of organizations now use AI in at least one business function. And a 2026 Deloitte report finds that agentic AI is already making inroads into customer support, supply chain management, R&D, and cybersecurity.

But as comfortable as you are using AI, this part may make you queasy: These aren’t just efficiency tools anymore. They’re making judgment calls.

Once your team starts trusting AI’s judgment more than their own — or more than yours — you’re dealing with a new power structure. The AI isn’t replacing your CEO, and it may not have a seat at the table, but it’s accumulating the authority that matters: being the smartest one in the room.

The Authority Problem

Think about what happens when your company’s AI advisor has a better track record than your executives. It’s been accurately predicting market shifts, realigning operations, and contributing more profit-generating ideas than your leadership team can devise using gut instinct.

At what point does “AI-assisted decision making” become “AI decisions that humans rubber-stamp”?

You might think your C-suite people are immune. They’re not. The same pattern that started with junior employees using AI for research and email drafts is moving upstairs. Middle managers are using it for resource allocation, VPs are using it for strategic planning, and execs are being briefed on reports generated by AI.

It’s nearly inevitable that AI will assume a leadership role in your company. The question is: How strong a role can you play in shaping what kind of leader it becomes?

Learning Through Living

Most companies are letting AI training happen organically (aka, by accident). You’re feeding it your data, your processes, your institutional knowledge — but who among your peers is asking: What values is it learning? What tradeoffs is it making? When it optimizes for efficiency, what are we losing?

In my novel “Once a Man,” I explore this problem, but at a scale that affects our entire civilization: How do you train an omnipotent AI to make decisions that preserve what matters about being human? The approach I test in this fictional thought experiment: Embed the developing AI in a simulated human experience where it grows up believing it’s human, learning to navigate moral and ethical choices from an embodied perspective.

It’s fiction, but the underlying question is one that could be critical to our future: Can AI develop a genuine understanding of human values?

How This Actually Works in Your Company

If AI is going to accumulate decision-making authority in your organization, here’s how you make that deliberate instead of haphazard:

  • Define your actual values, not your marketing values. What should you prioritize when growth conflicts with employee well-being? When speed conflicts with quality? When profit conflicts with principle? Write it down. Be specific. Be honest.
  • Test AI decisions against those values systematically. Not just “did this work?” but “did this work in the way we wanted it to work?” Track the tradeoffs AI is making. Make them visible.
  • Build accountability structures now. Once AI is embedded in your operations, retraining it to align with values you should have specified earlier is exponentially harder.
  • Institute AI decision review sessions. Once a year, or possibly more often, bring your team together to examine the toughest calls the AI made. Ask: Do these decisions align with our stated values? Could we have made better calls ourselves? Why or why not?
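To make the second and fourth steps above concrete, here is a minimal sketch of a decision log that scores each AI recommendation against your stated values and surfaces the tradeoffs for a review session. The value names, the scoring scale, and every function here are illustrative assumptions, not a standard tool — substitute the values your own team writes down.

```python
from dataclasses import dataclass, field

# Hypothetical value dimensions -- replace with the actual,
# written-down priorities from the first step above.
VALUES = ("employee_wellbeing", "quality", "principle")

@dataclass
class DecisionRecord:
    decision: str
    made_by: str                                 # "ai" or "human"
    scores: dict = field(default_factory=dict)   # value -> -1, 0, or +1

    def flagged_tradeoffs(self):
        """Values this decision traded away (negative score)."""
        return [v for v, s in self.scores.items() if s < 0]

log = []

def record(decision, made_by, **scores):
    """Log a decision, rejecting scores against unlisted values."""
    unknown = set(scores) - set(VALUES)
    if unknown:
        raise ValueError(f"Unrecognized values: {unknown}")
    rec = DecisionRecord(decision, made_by, scores)
    log.append(rec)
    return rec

def review_agenda():
    """Decisions that traded away a stated value -- the raw
    material for the periodic review session."""
    return [r for r in log if r.flagged_tradeoffs()]

record("Reroute shipment via cheaper carrier", "ai",
       quality=-1, principle=0)
print(review_agenda()[0].decision)
```

The point of a sketch like this is not the code itself but the discipline it forces: every AI decision gets named, attributed, and scored against the same short list of values, so the tradeoffs become visible instead of implicit.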

This isn’t a training program just for the AI. It’s asking your people to confront, in practice, what they truly believe about how the company should operate. You’ll surface conflicts between departments, gaps between stated and lived values, and instances where the AI is optimizing for things you didn’t realize you wanted — or knew you didn’t want.

Companies that do this may discover their AI system is learning from unclear or contradictory guidance. That will be uncomfortable, but those that don’t will wake up one day wondering what happened to the company they once knew.

The Toughest Truth

It’s quite possible your team can’t articulate your company’s values in a way that would meaningfully guide an AI. Maybe when you try to define “what makes a good decision here,” you realize there’s no consensus. Reaching that consensus may be among the toughest work you’ll ever do.

But at a time when you and your competitors are going head-to-head using similar AI models, this is where humans still make the difference. If you can define your company values earlier and more clearly than your competitors, and maintain those values through ongoing reviews, you’re still a leader. And that means the future still belongs to you.

Rick Moss is a multi-disciplinary artist living in Brooklyn, New York, and was a co-founder of RetailWire.

BrainTrust

"It might be gaining decision making abilities — but it shouldn’t. While it has some superb uses, allowing execs to avoid blame by saying 'well the AI decided' is not good."

Doug Garnett

President, Protonik


"At this stage in its development, AI should be used as an assistant, not as the primary decision maker."

Neil Saunders

Managing Director, GlobalData


"Why do we have to make the leap to discussing AI decision-making authority? Why not spend some time experimenting with and learning about AI suggestion-making capability?"

Jeff Sward

Founding Partner, Merchandising Metrics


Discussion Questions

Do you agree with the premise that AI is gaining decision-making authority in retail industry organizations? If so, what, if anything, concerns you about this transition?

Do you see evidence that AI leads are actively training models to align with their company values, or do most assume the models will adapt to the needs of the company without prescriptive guidance from humans?

How should businesses assure the AI applications they use support their foundational brand values and ethics?

6 Comments
Neil Saunders

Automation has always had some degree of influence in retail: algorithmic product recommendations, delivery route optimization, automated replenishment. But these things are systematized and rules-bound decisions. For more complex decisions that require judgement or taste, human involvement is still necessary; at this stage in its development, AI should be used as an assistant, not as the primary decision maker.

Dave Wendland
Reply to  Neil Saunders

@Neil Saunders, I like the way you characterized the difference between AI assistance to inform a decision and humans — with emotion and experience — retaining the role as primary decision maker. AI has an undeniable place in our society, across our businesses, schools, and government. The key is using it ethically and transparently.

Bradley Cooper

AI is clearly gaining influence in decision-making across retail organizations, but I think the bigger issue is how we frame its role. AI works best as a specialist capability applied to specific problems, not as a generalized “company brain.”

The concern arises when organizations try to ingest everything into a single AI layer instead of deploying targeted models where they actually add value; that’s when judgment can become blurred instead of improved.

Doug Garnett

It might be gaining decision making abilities — but it shouldn’t. I have been using AI for some things where it should work quite well (footnotes for a book) and it hallucinates continually. While it has some superb uses, allowing executives to avoid blame by saying “well the AI decided” is not a good picture.

More concerning, we must stop anthropomorphizing AI (using human-sounding metaphors for what is, underneath, the processing of bits). With that in mind, I highly recommend Melanie Mitchell’s article in Science about these dangers. https://www.science.org/doi/10.1126/science.adt6140

Scott Benedict

There is no question that AI is gaining greater decision-support authority within retail organizations—particularly in areas like pricing, inventory management, personalization, and marketing optimization. But it’s important to draw a distinction between AI informing decisions and AI fully making them. Most retailers today are still operating in a “human-in-the-loop” model, where algorithms surface insights or recommendations that humans ultimately validate. That balance is essential. As AI systems become more sophisticated, the concern is not necessarily the technology itself, but the risk of organizations becoming overly reliant on automated outputs without maintaining the human judgment and accountability needed to interpret those recommendations responsibly.

What remains less clear across the industry is how actively companies are training and governing AI systems to align with their brand values and operating principles. Some leading organizations are investing in AI governance frameworks, model auditing, and ethical review processes, but many others still assume the models will simply adapt to business needs without explicit guidance. That assumption can be risky. AI models learn from data and incentives, and without careful oversight, they may optimize for efficiency or revenue in ways that conflict with customer trust, fairness, or brand reputation.

The most responsible path forward is for retailers to build clear ethical guardrails and governance structures around their AI deployments. That includes maintaining human oversight, auditing model outputs, ensuring transparency in decision-making, and aligning AI training data and objectives with company values. In an ideal world, regulatory frameworks would provide consistent standards for this work—but meaningful government regulation in this space remains a long way off. Until then, the responsibility falls squarely on businesses themselves to ensure the AI tools they deploy support not just operational efficiency, but also the ethical and brand standards they want customers and employees to trust.

Jeff Sward

Why do we have to make the leap to discussing AI decision-making authority? Why not spend some time experimenting with and learning about AI suggestion-making capability? I’d like to evaluate a whole bunch of suggestions before turning over any decision-making authority. A whole bunch.
