AI Strategy in 2026: 6 Real-World Insights from the C-Suite

As we settle into 2026, one thing is clear: executive conversations about AI have evolved beyond hype. Launch Consulting recently hosted a private, candid roundtable moderated by our Chief Strategy Officer, Russ Whitman, featuring leaders across automotive, tech, and manufacturing.  

With so much hype surrounding AI, we wanted to have more conversations with leaders to better understand how they are approaching it, where they are succeeding, and, just as importantly, where they are failing. The first meeting of our roundtable series did not disappoint! We will continue to connect throughout the year and share our learnings. If you'd like to join one, drop us a line.

For our first roundtable, we asked our featured guest, Bob Rapp, Head of AI Data Science at General Motors, to share how he’s navigating AI prioritization, governance, and value creation. He also shared his perspective on what today’s leaders must understand to make AI deliver real business results.

Here are six themes that highlight how these forward-thinking execs are deploying AI in 2026:

  1. The "Quick Win" Trap: Why Fast Isn't Always Forward
  2. AI Prioritization Must Be Ruthlessly Strategic
  3. Redefining the Working Relationship between Humans and AI  
  4. The Risky Economics of Generative AI  
  5. Encouraging Safe, Personal AI Experimentation
  6. AI is Fast. Your Strategy Better Be Faster.

Each of these themes reflects a hard-earned lesson from the front lines of enterprise AI. Below, we break down what they mean and how they can shape your own AI strategy in the year ahead.

1. The "Quick Win" Trap: Why Fast Isn't Always Forward

The term "quick win" was met with outright rejection. Bob Rapp put it sharply: "No one needs a quick win. We need good, gooder, and goodest, and products we can ship."

Other executives in the roundtable warned against the lure of superficial progress that doesn’t tie to business value. Instead, they advocated for:

  • Prioritizing long-term transformation over short-term demos.
  • Measuring success in terms of lasting operational impact.
  • Avoiding projects that merely check an innovation box.

A common thread: quick wins often come with hidden costs, including misaligned incentives, wasted cycles, and unrealized potential. As one leader framed it, "We’re not here to impress with shiny objects. We’re here to move the needle."

At Launch, we echo this approach in our work with clients: prioritize measurable value and ensure the solution isn’t just fast, but scalable and sustainable. A quick win that dies in pilot is no win at all.

2. AI Prioritization Must Be Ruthlessly Strategic

Rapp challenged the group: "Nobody has an AI problem. We have business problems that might require AI."

Executives emphasized a disciplined approach to project selection:

  • Start with a clearly defined business outcome (e.g., saving time, generating revenue).
  • Evaluate whether AI is actually necessary.
  • Kill projects that lack committed business sponsorship.

Executives revealed that countless AI projects get proposed, but only a few are worth their time. Why? Because unless the business side is rolling up its sleeves and owning the outcome, the project doesn’t move forward.

We must prioritize the business imperative first, then map technology solutions that deliver measurable outcomes.

3. Redefining the Working Relationship Between Humans and AI

Tech leaders are transitioning from a world where "code was cheap" to one where design and oversight are everything. AI generation is abundant. What’s scarce? Thoughtful human review.

Rapp noted that some organizations now spend 10x more time on human review than they did a year ago because of how many auto-generation tools they use. He said, “When you’re producing with tools that are fairly random, you end up with way more work on the back end to verify those results.”

Teams don’t need to use more AI. Teams need humans who can direct AI to produce higher-quality outputs.

This shift demands:

  • Stronger product and design thinking.
  • System-level integration between AI, hardware, and human workflows.
  • A new mindset: AI is not the solution. It's a component of a solution.

Rapp put it bluntly: "If there's no voice of the customer in the design, we disengage. We don't need more slide decks. We need shippable software."

Rapp shared a powerful story of collaborating on a factory floor with an employee who had spent 18 years inspecting automotive paint. With AI tools, they were able to prototype a solution in real-time. The employee’s response? "This is the first time someone has really understood my problem."

Using AI wasn’t the fix. Humans collaborating and directing AI produced a solution that actually helped fix the problem.

That’s the magic: AI enabling deeper human understanding, not replacing it.

Humans + AI Collaboration: The Secret Sauce

At Launch, we define AI-powered transformation through our Nexus AI framework, an operating model for human-AI collaboration grounded in a Director-Verifier approach.  

It connects human intent with AI execution through deliberate direction and verification loops. Humans frame the problem. AI generates possibilities. Humans validate, refine, and decide.
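To make the loop concrete, here is a hypothetical sketch in Python of how a Director-Verifier cycle could be structured. This is an illustration of the pattern described above, not Launch's actual Nexus AI implementation; all function names and the toy example are assumptions for demonstration.

```python
# Hypothetical Director-Verifier loop: a human frames the problem,
# an AI "generate" step proposes candidates, and a human-supplied
# verifier gates what moves forward.

def director_verifier_loop(frame_problem, generate, verify, refine, max_rounds=3):
    """Run direct -> generate -> verify rounds until a candidate passes review."""
    intent = frame_problem()                # human: define problem and context
    for _ in range(max_rounds):
        candidate = generate(intent)        # AI: translate intent into a possibility
        ok, feedback = verify(candidate)    # human: validate against "good"
        if ok:
            return candidate                # human decides this is worth shipping
        intent = refine(intent, feedback)   # human redirects with feedback
    return None                             # nothing passed review within the budget

# Toy usage with stand-in functions (illustrative only):
result = director_verifier_loop(
    frame_problem=lambda: "detect paint defects",
    generate=lambda intent: f"prototype for: {intent}",
    verify=lambda c: ("paint defects" in c, "narrow the scope"),
    refine=lambda intent, fb: intent + " (" + fb + ")",
)
```

The design point is that the AI step sits between two human steps: intent comes in from a person, and nothing leaves the loop without a person verifying it.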

This dynamic was illustrated in a real‑world example shared by Rapp, where factory workers used an AI tool to describe their paint‑inspection challenges in plain language, rapidly generating a prototype that wasn’t immediately shippable but was directionally correct. The humans directed the intent and context, AI helped translate that intent into a tangible starting point, and humans then verified what “good” looked like before moving it into a production‑ready solution.

AI is only one component of the solution.  To create real impact, it must be paired with:

  • Human-centered design
  • Clear strategy and governance
  • Cultural adoption and trust

When organizations shift from “AI can handle that process” to a Director-Verifier model where humans direct, AI executes, and humans verify output, they build teams poised to deliver real ROI and impact with whatever AI solution they implement.

4. The Risky Economics of Generative AI

One of the most eye-opening moments in the discussion wasn’t about strategy or governance. It was about cost.  

Rapp shared that he personally spent $22,000 on AI in 2025 – and expects that number to hit $38,000 this year.

That personal investment mirrors a growing enterprise reality: generative AI isn’t cheap. In fact, 68% of companies are planning to invest $50 million to $250 million into generative AI over the next year, according to KPMG’s latest AI Quarterly Pulse Survey.

What begins as experimentation quickly becomes infrastructure. Token-based pricing, rising compute costs, and expanding usage across teams are turning AI from innovation budget into operational line item.
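A back-of-the-envelope model shows how token-based pricing becomes a line item. The prices and volumes below are illustrative assumptions, not quotes from any vendor, but the arithmetic is the same one finance teams are now running.

```python
# Illustrative token-spend estimate: usage-based pricing turns
# per-request costs into a monthly operational line item as
# volume scales across teams.

def monthly_token_cost(requests_per_day, tokens_in, tokens_out,
                       price_in_per_m, price_out_per_m, days=30):
    """Estimate monthly spend in dollars for one AI workload.

    Prices are expressed per million tokens, the common unit
    in usage-based model pricing.
    """
    per_request = (tokens_in * price_in_per_m +
                   tokens_out * price_out_per_m) / 1_000_000
    return per_request * requests_per_day * days

# Example: 5,000 requests/day, 2,000 input / 1,000 output tokens each,
# at assumed rates of $3 and $15 per million tokens.
cost = monthly_token_cost(5_000, 2_000, 1_000, 3.0, 15.0)
print(f"${cost:,.0f}/month")  # → $3,150/month for a single workload
```

Multiply that by dozens of workloads and the case for capping access and right-sizing models, as the executives below describe, makes itself.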

As adoption scales, some companies are starting to:

  • Cap user access to premium tools as they see AI spend drastically increase  
  • Reevaluate which models are appropriate for which tasks – not everything needs the latest reasoning model
  • Audit AI ROI with the same rigor applied to headcount or capital spend

"When tokens cost more than a developer’s time, you have to ask hard questions," Rapp noted.

The shift here is subtle but significant. AI is no longer just about capability — it’s about cost discipline and value alignment. Leaders aren’t asking, “Can we use AI?” They’re asking, “Where does it truly create leverage?”

At Launch, we often talk about applying a FinOps mindset to AI, where we try to be thoughtful about spend today and tomorrow. This sustainable AI transformation requires architectural decisions that balance performance, cost, and business value from the outset.  

The organizations that treat AI as an economic system, not just a technical upgrade, are the ones building a durable AI strategy that drives measurable advantages.

5. Encouraging Safe, Personal AI Experimentation

As the session wrapped, Rapp offered a practical piece of advice: “Find an agent to do a low-risk behavior for you personally before you deploy them at work.”

His message wasn’t about building business applications. It was about building understanding.

Rapp encouraged leaders to experiment with AI agents in their own lives first, using low-risk services where mistakes are harmless and learning is fast. The goal is to observe how agents behave, where they go wrong, and how much effort it actually takes to supervise them.

Personal experimentation allows leaders to:

  • Understand how agents behave in the real world.
  • Experience failure modes firsthand.
  • Learn how oversight, monitoring, and boundaries actually work.

To illustrate the stakes, he shared a cautionary example of an airline whose agent mistakenly priced Olympic flights at one dollar. It’s the kind of error that might feel amusing in a personal sandbox but becomes costly when money, access, or operational responsibility are involved.

That’s why Rapp stressed starting small. Before giving an agent real authority — or something as simple as a credit card — leaders need firsthand experience with how difficult it can be to monitor, guide, correct, or even shut down an agent once it’s in motion.

The takeaway was clear: go try an agent yourself. Learn in a low-risk environment first. That hands-on experience builds the intuition leaders need before agents ever move into more serious settings.

6. AI is Fast. Your Strategy Better Be Faster.

Perhaps the most resonant theme was the blistering pace of change. Cloud-native agents, open-source LLMs, and new model releases are reshaping what’s possible almost weekly.

“As strategy creators, we’re often asked to predict the future,” Whitman says, “but our strategy probably changes every 12-18 months to keep up with the tech.  There are some foundational things that must be in place to make any investment worth it.”  

As Rapp put it: "What I said about Google six weeks ago is already outdated. You have to adjust for speed – constantly."

Execs agreed: 12-month planning cycles are dead. Now it's about:

  • Building agile strategy teams that can reassess bets every 6–8 weeks.
  • Prioritizing adaptability over precision.
  • Expecting disruption and engineering resilience.

At Launch, we help customers build resilient transformation roadmaps that are adaptable by design. We don’t anchor you to one model or platform; we equip you to evolve.

The Ultimate Executive Takeaway: AI's Success Depends on People

Whether it was empowering frontline teams, reducing toil, or rethinking the role of design and oversight, executives emphasized that human experience must shape every part of AI transformation.

Despite the tools, tokens, and tech, the conversation always came back to people. Rapp's example of working with an assembly-line veteran proved just that.  

That’s what AI is really for: unlocking understanding, enabling creativity, and building solutions that matter.

The leaders in our roundtable aren’t chasing hype. They’re orchestrating change thoughtfully, strategically, and always with humans in the loop.

Stay tuned. This is only the beginning of the enterprise AI conversation in 2026.

Ready to Take the Next Step?

If you're navigating your own AI strategy—or still deciding where to begin—Launch Consulting can help you connect the dots between technology, humanity, and purpose.

Connect with a Launch Navigator to begin your transformation.  

Let's build what comes next. Together.
