Recently I’ve been pondering a question that’s on a lot of underwriting leaders’ minds right now: 

If AI is getting so good, where do humans actually still add value? It’s a provocative question, but I believe it’s the wrong one. The better question is: Where does human judgment improve the outcome? 

The reality is that AI isn’t replacing underwriting judgment. Instead, it’s revealing where that judgment actually matters. And just as importantly, it’s showing that workflows have always mattered, and they still do in the age of AI in insurance underwriting (maybe even more so). 

 

Automation Is Winning the Middle, But Underwriting Was Never About the Middle 

Across AI in insurance underwriting workflows, AI performs exceptionally well in the predictable center of the portfolio. Models thrive with structured data, clean submissions, and clear rules—and that’s exactly where they should thrive. But underwriting has never really been about the middle. It’s always been about protecting the edges. 

The losses that hurt portfolios don’t typically come from straightforward risks that fit neatly into a model. They come from: 

  • Unusual exposures 
  • Incomplete or misleading data 
  • Emerging risks that don’t yet have historical patterns 

And that’s exactly where human judgment earns its keep. 

 

Where Human Judgment Still Wins 

When you look closely at AI in insurance underwriting operations, three areas consistently stand out. 

  1. Ambiguity

AI depends on structured, complete data. Underwriters deal with reality, and reality is often messy: missing information, inconsistencies, or narratives that don’t quite line up. 

Experienced underwriters know how to: 

  • Ask better questions 
  • Spot gaps 
  • Interpret what’s not being said or provided 

 

  2. Behavioral Context

Data tells you what happened, but underwriters interpret why it happened and whether it will happen again. That context often lives outside the dataset. 

 

  3. Edge Cases

Underwriting performance is defined by how well you manage the outliers. We recently saw a case where an automated model approved a submission instantly because it met every rule, and yet a human reviewer flagged it anyway. Something about the risk didn’t add up, and they were right. The exposure had changed in a way no dataset had captured.  

 

AI Didn’t Make Workflows Important. It Made Them Visible 

There’s a tendency right now to treat workflow design as something new or something driven by AI. It’s not. Workflows have always determined underwriting outcomes. 

They’ve always dictated: 

  • How work moves 
  • Where decisions get made 
  • Where risk is introduced or caught 

What’s changed is that AI works so fast that it’s making those workflows (especially ineffective ones) impossible to ignore. When a model is making decisions instantly, any inefficiency, gap, or ambiguity in the process becomes immediately visible and is often amplified. 

We see this all the time. 

  • If the workflow is well-designed, AI accelerates it. 
  • If the workflow is fragmented, AI just helps you make mistakes faster. 

The Real Risk 

Most insurers aren’t struggling with whether to use AI in insurance underwriting. They’re struggling with how to design workflows around it. 

Too often, we see: 

  • Humans reviewing outputs without clear purpose 
  • Checkpoints that exist for compliance, not impact 
  • Underwriters reduced to rubber-stamping decisions 

That doesn’t improve outcomes. It just slows things down. 

 

Where Experience Actually Shows Up: Workflow Design 

This is where operational experience matters. At Covenir, we’ve worked across carriers, MGUs, and insurtechs, implementing underwriting and operational workflows in a wide range of environments. Over time, I’ve seen a consistent pattern: AI can’t fix a bad workflow. The operations that succeed are the ones whose workflows already: 

  • Clearly define where decisions happen 
  • Route complexity to the right level of expertise 
  • Create visibility into how outcomes are produced 
  • Make it easy to intervene when something doesn’t look right 

In other words, the best outcomes happen with workflows that were designed for performance before AI and are now being adapted to take advantage of it. 

Designing AI-Human Workflows That Enable Accountability

Fortunately, because we sit alongside so many insurers, supporting their underwriting operations, we can share best practices on the workflows that pair most effectively with AI. That same experience also helps us pinpoint where issues are occurring, which addresses one of the biggest challenges: accountability.

As AI becomes more embedded in underwriting, accountability gets harder, not easier.  When decisions span models, vendors, and internal teams, responsibility can become diffuse. We think about accountability in three layers: 

  • Design: Who decided how the workflow operates? 
  • Execution: Was the process followed correctly? 
  • Model: Are the limitations understood and monitored? 

We’ve found that this layered view makes it easier to pinpoint issues when they arise. 

Final Thought 

AI is transforming underwriting, but humans still have a lot to contribute to the process.  If you’re evaluating how to evolve your underwriting operations and looking for an outsourcing vendor who can help you WOW your policyholders, contact us.