Explainable AI for Autonomous Ship Navigation Aims to Increase Trust

April 16, 2025

The Titanic sank 113 years ago, on April 14-15, after hitting an iceberg, with human error likely causing the ship to stray into dangerous waters. Today, autonomous systems built on artificial intelligence (AI) could help ships avoid such accidents, but could such a system explain to the captain why it was maneuvering a certain way?

That’s the idea behind explainable AI, which should help human operators trust autonomous systems more.

Credit: Yoshiho Ikeda, Professor Emeritus, Osaka Prefecture University

Researchers from Osaka Metropolitan University’s Graduate School of Engineering have developed an explainable AI model for ships that quantifies the collision risk for all vessels in a given area, an important feature as key sea-lanes have become ever more congested.

Graduate student Hitoshi Yoshioka and Professor Hirotada Hashimoto created an AI model that explains the basis for its decisions and the intention behind actions using numerical values for collision risk.
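The paper's actual model is not reproduced in this article, so the sketch below only illustrates one conventional way such a collision-risk number can be computed: from the distance and time to the closest point of approach (DCPA/TCPA) between two vessels. The Vessel class, the cpa and collision_risk functions, and the d_safe and t_horizon thresholds are all illustrative assumptions, not the authors' method.

```python
import math
from dataclasses import dataclass

@dataclass
class Vessel:
    x: float      # east position (m)
    y: float      # north position (m)
    vx: float     # east velocity (m/s)
    vy: float     # north velocity (m/s)

def cpa(own: Vessel, target: Vessel) -> tuple[float, float]:
    """Return (DCPA, TCPA): distance and time to the closest point of approach."""
    dx, dy = target.x - own.x, target.y - own.y
    dvx, dvy = target.vx - own.vx, target.vy - own.vy
    rel_speed_sq = dvx**2 + dvy**2
    if rel_speed_sq < 1e-9:            # effectively no relative motion
        return math.hypot(dx, dy), 0.0
    tcpa = max(0.0, -(dx * dvx + dy * dvy) / rel_speed_sq)  # clamp: CPA already passed
    dcpa = math.hypot(dx + dvx * tcpa, dy + dvy * tcpa)
    return dcpa, tcpa

def collision_risk(own: Vessel, target: Vessel,
                   d_safe: float = 1852.0,      # assumed safety radius: 1 nautical mile
                   t_horizon: float = 1200.0    # assumed look-ahead: 20 minutes
                   ) -> float:
    """Map DCPA/TCPA to a risk value in [0, 1]; higher means more dangerous."""
    dcpa, tcpa = cpa(own, target)
    spatial = max(0.0, 1.0 - dcpa / d_safe)      # how close the encounter will get
    temporal = max(0.0, 1.0 - tcpa / t_horizon)  # how soon it will happen
    return spatial * temporal

# Example: own ship heading north, a crossing target 2 km to the east heading west
own = Vessel(0.0, 0.0, 0.0, 6.0)
target = Vessel(2000.0, 500.0, -5.0, 0.0)
print(f"risk = {collision_risk(own, target):.2f}")
```

A per-vessel number like this is what gives an explainable system something concrete to cite: "I turned to starboard because the risk toward that vessel reached 0.8" is a statement a watch officer can check against the radar picture.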

“By being able to explain the basis for the judgments and behavioral intentions of AI-based autonomous ship navigation, I think we can earn the trust of maritime workers,” Professor Hashimoto stated. “I also believe that this research can contribute to the realization of unmanned ships.”

The findings of their research were published in Applied Ocean Research.
