
Using AI To Advance Engineering Analysis: Not More Data, More Physics

Maritime Activity Reports, Inc.

September 18, 2023

Copyright Kras99/AdobeStock

The goal of engineering analysis is to use models of the real world to simulate and predict the performance of a design with confidence, explore design modifications, and inform downstream stakeholders—the owners, builders, operators, and passengers—with knowledge that the design works as intended before it is built. 

To do so, we need models that characterize the physical world. That is easier said than done, but it underpins much of what we do as engineers. This is precisely why the cutting edge of engineering analysis is driving toward high-fidelity simulations that capture the complex nature of the real world with a high degree of accuracy. At present, the pursuit of high-fidelity simulation largely falls into two camps: the physics-based crowd and the data-driven machine learning crowd.

The former is more traditional: take physical laws, derive equations, and obtain solutions numerically. Think finite difference methods (FDM), finite volume methods (FVM), boundary element methods (BEM), finite element methods (FEM), and many other numerical techniques. In all cases, we are attempting to solve governing equations that are founded on physical principles. The latter, one might argue, is more experimental: utilize machine learning algorithms to build models based on observed or simulated data. Think artificial neural networks (ANN), deep learning, convolutional neural networks (CNN), long short-term memory (LSTM), and transformer networks, to name a few. The landscape of data-driven modeling is vast and rapidly changing, but the main assumption is that modeling can be accomplished using only data.

There tends to be a strong bias when you talk to someone from one of the two camps. Physics-based people will suggest that data-driven methods are more hype than substance, and that there is over-confidence in the capability of machine learning models. Data-driven people may suggest that physics-based people are stodgy and old-fashioned. As with anything, we tend to like our own creation the most, and both sides have unique advantages and disadvantages. The truth of the matter is that neither approach alone can give us what we as engineers really want: models that are both fast and good.

For physics-based simulations, the appeal is that they tend to provide good answers. Because they are founded on physical principles, they are often reliable models built from meaningful mathematics. We can interpret them, understand their output and mechanisms, and modify inputs with confidence. If properly configured, physics-based models can also give very accurate results, again because they are founded on the laws of physics. Their biggest limitation, when it comes to high fidelity, is that they are exceptionally costly. High-quality simulations require massive computational resources, and even if such resources are available, the amount of data that can be produced is quite limited.

A great example is an operability analysis of a ship in a seaway. Assuming we analyze only five forward speeds, headings every 30 degrees, and 10 different seaways, and use industrial-grade RANS CFD, the time it will take to finish the simulations is on the order of 15 years. The high cost of simulation makes high-fidelity methods practical only for specific cases.
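For a rough sense of where a figure like that comes from, consider a back-of-the-envelope sketch in Python. The case count follows directly from the speeds, headings, and seaways above; the wall-clock time per RANS run is an assumed value for illustration only.

```python
# Back-of-the-envelope cost of the operability matrix.
# The days-per-run figure is an assumption for illustration, not a benchmark.
speeds = 5                   # forward speeds analyzed
headings = 360 // 30         # headings every 30 degrees -> 12
seaways = 10                 # sea states
runs = speeds * headings * seaways   # 600 RANS CFD cases

days_per_run = 9             # assumed wall-clock days for one industrial RANS run
years = runs * days_per_run / 365
print(f"{runs} runs, roughly {years:.0f} years of compute")
# -> 600 runs, roughly 15 years
```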

On the other hand, data-driven methods tend to be very fast. Trained models evaluate quickly: a single seaway from the example above might take only seconds to compute. However, data-driven methods suffer, ironically, from a complete reliance on data. Continuing with the seakeeping example, it is necessary to make evaluations over a range of speeds, headings, and wave conditions. This means that whatever model we use, it must work over a range of different input parameters. This property, sometimes referred to as transferability, is difficult to achieve with purely data-driven models without a large training data set. It is not uncommon to need hundreds or thousands of training samples spanning the entire range of expected input parameters.

This becomes a considerable disadvantage in engineering applications, where we do not have much, if any, data to begin with, and therefore must generate it using physics-based methods or model testing. It is common for machine learning researchers to show that, if the data were available, the model could perform well, but data is almost never available in the quantity and quality required. Furthermore, the need for a large training data set means that despite the fast evaluation time of a data-driven method, the time to develop training data can be of a similar order of magnitude to a physics-based analysis. And of course, if we spend the time to generate data using a physics-based method, what is the value of the resulting data-driven model? To make data-driven methods viable, the combined training and evaluation cost must be less than or equal to the evaluation cost of a physics-based method.
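To make the data appetite concrete, the sketch below wires up a purely data-driven surrogate for a seakeeping-style response using scikit-learn. The response function, input ranges, and sample count are all invented for illustration; the point is that each training label would, in a real analysis, be a costly high-fidelity simulation or model test.

```python
# Minimal sketch of a purely data-driven surrogate.
# Toy response function and invented input ranges; scikit-learn assumed available.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def toy_response(x):
    # Stand-in for an expensive CFD or model-test response; purely illustrative.
    speed, heading, hs = x[:, 0], x[:, 1], x[:, 2]
    return hs * (1.0 + 0.1 * speed) * np.abs(np.cos(np.radians(heading)))

# Hundreds of samples spread over the whole operating envelope:
# speed 0-25 kn, heading 0-360 deg, significant wave height 1-9 m.
X_train = rng.uniform([0.0, 0.0, 1.0], [25.0, 360.0, 9.0], size=(500, 3))
y_train = toy_response(X_train)

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
)
surrogate.fit(X_train, y_train)

# Evaluation is nearly instant, but every one of the 500 labels above would
# have cost a full high-fidelity run in practice.
print(surrogate.predict(np.array([[12.0, 45.0, 4.0]])))
```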

If physics-based methods are good, and data-driven methods are fast, it is natural to wonder whether they could be used in tandem to make the “fast and good” models that would greatly benefit engineering analysis. But the solution takes more ingenuity than simply training a neural network on the output of a physics-based method. The reality is that training data requirements and a lack of transferability make such an approach of little help to an engineer working on a novel design. To achieve useful fast and good models, we need to take a more thoughtful approach.

Imagine that every governing equation, whether it be related to fluid flow, structural deformation, or dynamics, could be decomposed into two parts: a low-cost, low-fidelity part and a high-cost, high-fidelity part. In fact, this is a very natural idea in engineering: take the complex, close-to-life model and make assumptions to yield a model that is tractable. Traditional methods, which still make up the bulk of engineering analysis today, simply throw out the high-fidelity part and solve only the low-fidelity part, at a great cost savings. For many problems, the low-fidelity part also tends to be robust: it often captures most of the solution. But what if we modeled the high-fidelity part using machine learning? The result is a model composed of both physics-based and data-driven parts: a hybrid method.
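A minimal sketch of that decomposition, with both the low-fidelity physics and the high-fidelity "truth" replaced by toy one-dimensional functions, might look like the following. Everything here is an illustrative assumption rather than a production method; the essential point is that the machine-learned part only has to capture the residual between the two models, so a handful of expensive samples goes a long way.

```python
# Minimal sketch of a hybrid (physics + machine learning) model.
# The low- and high-fidelity functions are toy stand-ins, purely illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def low_fidelity(x):
    # Cheap physics-based part, e.g. a linearized estimate.
    return 0.8 * x[:, 0]

def high_fidelity(x):
    # Stand-in for costly CFD or model-test data.
    return x[:, 0] + 0.3 * np.sin(3.0 * x[:, 0])

# Only a small number of expensive high-fidelity samples are needed, because
# the machine-learned part only has to capture the (smaller, smoother) residual.
X_hf = rng.uniform(0.0, 2.0, size=(8, 1))
residual = high_fidelity(X_hf) - low_fidelity(X_hf)
correction = GaussianProcessRegressor().fit(X_hf, residual)

def hybrid(x):
    # Physics-based baseline plus learned correction.
    return low_fidelity(x) + correction.predict(x)

X_test = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
print(hybrid(X_test))         # hybrid prediction
print(high_fidelity(X_test))  # high-fidelity reference for comparison
```

The design choice that matters is that the learned correction is small and well-behaved relative to the full response, which is what keeps the training data requirement low and gives the model a chance to transfer to nearby conditions.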

While talk of generative AI and large language models rages on, when it comes to engineering analysis the real gem is hybrid methods. Though the idea is very new, even to the research world, hybrid methods are demonstrating a few key characteristics. First, methods of this type greatly reduce the amount of training data required; in general, the data required is a small fraction of what data-only methods need. Second, hybrid methods are almost as fast to evaluate as purely data-driven methods, because the low-fidelity physics part of the model usually requires minimal computational resources. Third, hybrid methods still yield solutions with accuracy similar to that of the training data. This means that if we use a small amount of costly high-fidelity simulation data or model test data, we can make similar-fidelity predictions with the resulting hybrid method. Lastly, and most importantly, hybrid methods are being shown to be transferable. Continuing with our seakeeping example, this means we can take that one seaway's worth of high-fidelity data and make similar-fidelity predictions in other seaways. This is what makes hybrid methods incredibly powerful: they leverage data beyond its traditional limits.

Some might argue that the highest-fidelity, most sophisticated solution is not always the most profitable. With good engineering judgment, the low-fidelity, traditional approach can keep engineering costs low and still yield a relatively high-quality product. This notion is certainly one of the biggest hurdles in evaluating and adopting new methods, and hybrid methods are no exception. While the benefit a hybrid method can bring to an analysis may be limited to certain applications, there is also the ever-increasing demand for design performance, whether it be speed, seaway operability, survivability, safety, efficiency, serviceability, ease and economy of manufacturing, or environmental impact. It is not unreasonable to expect that the most demanding design requirements will benefit from a hybrid method's ability to leverage high-fidelity data.

About the Author
Kyle E. Marlantes is a naval architect, software developer, and PhD candidate at the University of Michigan, where he develops methods to leverage data in engineering applications.


As published in the September 2023 edition of Maritime Reporter & Engineering News
