By: Zach Miller
Insurance isn’t what it used to be, and that’s exactly the point. In this exclusive interview, we sit down with Lahari Pandiri, Lead System Test Engineer at Progressive Insurance and a leader in AI-driven insurance transformation. Having spent almost ten years at one of the nation’s largest insurers, Lahari has observed the evolution of data systems from the inside out.
Her expertise spans everything from neural networks and deep learning to ethical automation strategies already proving themselves in real-world applications. She talks about predictive models, digital fraud detection, and why ethical AI has become a necessity. Lahari's ability to tie the logic of code to the complexity of human behavior makes this conversation worth your time.
Q1: Lahari, thank you for joining us today. Your work at the frontlines of AI-driven transformations in the insurance industry has reshaped how we think about risk assessment and fraud detection. Could you begin by telling us about your journey and how your passion for AI has influenced your approach to insurance?
Lahari Pandiri: Thank you for having me. My journey into AI and insurance has been driven by a deep fascination with how technology can address complex, real-world problems. With a background in AI, machine learning, and neural networks, I found a compelling intersection in the insurance industry—one that was ripe for exploration, yet grounded in decades of traditional practices.
I began exploring how AI could help tackle persistent challenges like fraud detection, underwriting inefficiencies, and risk profiling. Over time, my passion evolved into authoring research and contributing to the development of Agentic AI systems that enhance predictive accuracy while also introducing ethical and personalized solutions to insurance. My work now focuses on transforming insurance from a reactive to a proactive domain, where real-time data and intelligent automation can optimize claims, customize coverage, and improve customer trust and satisfaction. It’s an exciting frontier, and I’m proud to be part of shaping its future.
Q2: In your research article “Harnessing Agentic AI for Predictive Risk Assessment and Fraud Detection in Insurance,” you detail how deep learning models personalize coverage and optimize claims. Could you elaborate on how these personalized AI systems are tested for fairness and accuracy across diverse customer segments?
Lahari Pandiri: Absolutely. When developing personalized AI systems in insurance, particularly those leveraging Agentic AI and deep learning, fairness and accuracy are not just technical goals; they are essential requirements. To ensure these systems serve diverse customer segments equitably, we implement a multi-layered validation framework.
First, we use stratified data sampling across demographic, geographic, and socio-economic segments to reduce the risk of biased model training. Then, during model evaluation, we deploy fairness metrics like demographic parity, equal opportunity, and disparate impact analysis to identify and address bias.
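Two of the fairness metrics Lahari mentions, demographic parity and disparate impact, are straightforward to compute. The sketch below is a minimal illustration using invented toy data, not Progressive's actual pipeline; the group labels, decisions, and the 0.8 review threshold are assumptions for illustration only.

```python
# Toy sketch: demographic parity difference and disparate-impact ratio.
# Decisions and group labels are hypothetical, not real insurer data.

def approval_rate(decisions, groups, group):
    """Share of applicants in `group` whose application was approved."""
    in_group = [d for d, g in zip(decisions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_diff(decisions, groups, group_a, group_b):
    """Absolute gap in approval rates between two segments (0 = parity)."""
    return abs(approval_rate(decisions, groups, group_a)
               - approval_rate(decisions, groups, group_b))

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of approval rates; values below 0.8 often trigger review
    (the informal 'four-fifths rule')."""
    return (approval_rate(decisions, groups, protected)
            / approval_rate(decisions, groups, reference))

# Toy example: 1 = approved, 0 = denied
decisions = [1, 1, 0, 1, 0, 1, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(decisions, groups, "A", "B"))  # 0.25
print(disparate_impact_ratio(decisions, groups, "B", "A"))
```

In practice these checks run per segment over held-out evaluation data, with a ratio below the review threshold routed to a human audit rather than failing the model automatically.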
In parallel, accuracy is maintained through rigorous cross-validation and performance benchmarking against traditional actuarial models. We also conduct real-world simulation testing to assess model decisions in varied claim scenarios. Ongoing audits and post-deployment monitoring help us adapt these systems continuously, ensuring that personalization never comes at the cost of inclusivity or transparency.
Q3: From your time at Progressive to your academic contributions, you’ve worked extensively on integrating intelligent automation into property and casualty insurance. What are the most pressing infrastructure or data challenges in scaling these AI-powered solutions across large insurance portfolios?
Lahari Pandiri: One of the most significant challenges in scaling AI-powered solutions across large insurance portfolios is data fragmentation. Property and casualty insurance relies on vast and varied datasets—from telematics and climate data to claims history and third-party assessments. Ensuring data consistency, quality, and interoperability across systems is a persistent hurdle.
Another challenge is the legacy infrastructure that many insurers still operate on. These systems were not designed to support real-time analytics or AI integration, which makes scaling intelligent automation both technically and financially complex. Additionally, implementing AI at scale demands robust data governance, secure cloud infrastructure, and seamless API-based connectivity to enable real-time decision-making.
At Progressive and in my broader research, I’ve focused on developing hybrid architectures that blend traditional systems with modular AI components. These allow for gradual transformation without disrupting core operations. Investing in flexible data pipelines, cloud-native platforms, and explainable AI has been key to overcoming these challenges and enabling scalable, responsible innovation.
Q4: Your paper, “Machine Learning-Powered Actuarial Science,” suggests a radical shift in underwriting and policy pricing. How do you foresee regulatory frameworks adapting to such dynamic and predictive models, particularly in high-stakes domains like life and health insurance?
Lahari Pandiri: That’s a critical question. As machine learning transforms actuarial science—introducing dynamic pricing, behavioral risk modeling, and real-time underwriting—regulatory frameworks will need to evolve in parallel to maintain fairness, transparency, and consumer protection.
I foresee a shift toward regulatory co-design, where insurers and regulators collaborate to define acceptable AI use cases, auditability standards, and fairness thresholds. One area of focus will be explainable AI, ensuring that decisions, especially in life and health insurance, are interpretable and justifiable to both regulators and customers.
Additionally, I expect more frequent model validations and the introduction of “model risk management” policies, similar to financial services, where regulators scrutinize not just outcomes but also input data, training methods, and feedback loops. Privacy laws like HIPAA and GDPR will also shape how predictive models access and use sensitive data.
Ultimately, regulatory evolution will need to balance innovation with accountability. If done thoughtfully, it can empower insurers to personalize offerings ethically while maintaining trust and compliance in high-stakes domains.
Q5: As someone leading AI innovation in the insurance space, particularly at Progressive Insurance, how has your role as Lead System Test Engineer shaped your perspective on merging traditional insurance practices with emerging technologies like Agentic AI?
Lahari Pandiri: My role as a Lead System Test Engineer at Progressive has given me a unique vantage point at the intersection of system integrity, regulatory compliance, and technological innovation. It’s taught me that successful integration of emerging technologies like Agentic AI doesn’t just depend on model performance—it hinges on how well those systems are tested, validated, and aligned with the real-world workflows of insurers.
I’ve learned to appreciate the nuances of traditional insurance processes, from underwriting and claims adjudication to risk pooling and regulatory review. This understanding allows me to embed AI in ways that enhance rather than disrupt those foundational practices. For example, Agentic AI can automate decision pathways and adapt to evolving policyholder behavior, but it must also work seamlessly within established auditing and compliance protocols.
Testing at scale, ensuring model explainability, and simulating edge cases have all become critical components of how I approach innovation. In many ways, my engineering background has grounded my AI work, ensuring it’s both transformative and practical.
Q6: As someone deeply involved in AI for fraud detection, could you share some of the most significant developments in this area that are helping insurers minimize fraudulent claims while ensuring fair coverage for all?
Lahari Pandiri: Absolutely. One of the most exciting developments in AI for fraud detection is the integration of real-time behavioral analytics with deep learning. These systems can now detect subtle anomalies in claim submissions, such as linguistic cues, timing inconsistencies, or unusual geographic patterns, going far beyond what rule-based systems could catch.
Another major advancement is the use of graph neural networks (GNNs) to uncover hidden relationships between entities, like claimants, repair shops, and prior incidents. This helps insurers identify organized fraud rings and recurrent abuse with improved precision.
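The GNNs Lahari describes learn over claim-entity graphs; as a much simpler illustration of the underlying idea, the sketch below links claims that share an entity (a repair shop or phone number) and groups them into connected components, surfacing potential rings. All claim IDs and entity names here are invented toy data.

```python
# Toy sketch: link claims through shared entities, then find clusters
# with plain depth-first search. A real system would use a learned GNN.
from collections import defaultdict

claims = {
    "C1": {"shop:FastFix", "phone:555-0101"},
    "C2": {"shop:FastFix", "phone:555-0102"},
    "C3": {"shop:AutoPro", "phone:555-0102"},
    "C4": {"shop:Honest",  "phone:555-0199"},
}

# Build claim-to-claim edges through shared entities.
by_entity = defaultdict(list)
for claim, entities in claims.items():
    for e in entities:
        by_entity[e].append(claim)

adj = defaultdict(set)
for linked in by_entity.values():
    for a in linked:
        for b in linked:
            if a != b:
                adj[a].add(b)

def components(nodes, adj):
    """Group claims into clusters of transitively linked claims."""
    seen, groups = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(adj[cur])
        groups.append(comp)
    return groups

# Clusters of more than one claim are candidates for investigator review.
rings = [c for c in components(claims, adj) if len(c) > 1]
print(rings)  # [{'C1', 'C2', 'C3'}] — C4 shares nothing, so it stands alone
```

Here C1 and C2 share a repair shop while C2 and C3 share a phone number, so all three fall into one cluster even though C1 and C3 have no direct link; that transitive reach is what makes graph methods effective against organized rings.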
We’re also seeing the rise of explainable fraud models that allow human investigators to understand why a claim was flagged. This transparency is key to ensuring that legitimate customers are not unfairly penalized.
Finally, hybrid systems that combine supervised and unsupervised learning are proving effective. They catch both known fraud patterns and novel, evolving tactics. By integrating these tools within ethical frameworks and ongoing oversight, insurers can find a balance between fraud minimization and equitable coverage.
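The hybrid idea in this final point can be sketched in a few lines: a supervised-style score over known fraud indicators, plus an unsupervised outlier score that catches novel behavior. The indicator list, thresholds, and claim amounts below are illustrative assumptions, not actual fraud rules.

```python
# Toy sketch of a hybrid detector: known-pattern matching (supervised
# flavor) combined with a z-score outlier check (unsupervised flavor).
from statistics import mean, stdev

# Hypothetical known fraud indicators, for illustration only.
KNOWN_PATTERNS = {"claim within 7 days of policy start", "no police report"}

def pattern_score(claim_flags):
    """Fraction of known fraud indicators present on this claim."""
    return len(claim_flags & KNOWN_PATTERNS) / len(KNOWN_PATTERNS)

def anomaly_score(amount, historical_amounts):
    """How many standard deviations this claim sits above the norm."""
    mu, sigma = mean(historical_amounts), stdev(historical_amounts)
    return (amount - mu) / sigma

def needs_review(claim_flags, amount, history,
                 pattern_cut=1.0, anomaly_cut=3.0):
    """Escalate if either detector fires; a human investigator decides."""
    return (pattern_score(claim_flags) >= pattern_cut
            or anomaly_score(amount, history) >= anomaly_cut)

history = [1200, 900, 1500, 1100, 1300, 1000]
print(needs_review(set(), 1150, history))            # False: typical claim
print(needs_review(set(), 9000, history))            # True: novel outlier
print(needs_review(KNOWN_PATTERNS, 1150, history))   # True: known pattern
```

The design choice worth noting is that either branch alone only escalates a claim for review; keeping a human investigator in the loop is what supports the balance between fraud minimization and equitable coverage described above.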
Summary
Lahari Pandiri is here to change the way we think about insurance. From auto to home policies, her ideas show how AI can support smarter, fairer decisions that benefit both companies and customers. She emphasizes that innovation should never come at the cost of ethics, especially when dealing with people’s safety and finances.
Lahari’s insights bring sharp awareness, a grounded perspective, and years of experience solving real problems. The future of insurance is already in motion, and people like Lahari are actively shaping it. If the goal is balance between intelligence and integrity, this interview shows that it is indeed happening. And the industry is better for it.