SMS_ThinkTank™ is the global resource and leader in systems modeling and simulation, bringing the worlds of systems engineering and computer-aided engineering together.
CIMdata and SMS_ThinkTank™ recently came together to host a webinar on the topic of Systems Modeling and Simulation (SMS).
There were several questions that we did not have time to answer live; answers to some of them are provided below. If you were unable to join us, you can watch the webinar here.
We hope this post generates further discussion that enriches our collective understanding of this topic, and we would appreciate hearing from you. If you would like to learn more, consider taking one of our SMS Training and Certificate programs.
Best wishes.
Don Tolle, CIMdata, Frank Popielas, SMS_ThinkTank & Ed Ladzinski, SMS_ThinkTank
1. What areas should be focused on when doing a design capability assessment of an organization, so that it helps the organization elevate to the level of Design 4.0?
As we mentioned during the webinar, we always focus on three major categories: Organization, Process, and Technology. Within these categories, we define several capabilities that need to be evaluated, and key criteria characterize each capability. This is how we build our CMMI (Capability Maturity Model Integration) models that are specific to the targeted focus area, such as Systems Modeling and Simulation, CAE, Digital Twin, Industry 4.0, or Design 4.0, to name a few.
The main focus should always be on the organization and the process. Technology will fall into place automatically when the other two categories are taken care of properly. Too often, it is still technology that drives the processes and organizational behavior.
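As a rough illustration of the structure described above, here is a minimal sketch in Python. The three categories are from our model, but the capability names, criteria scores, and the 1-5 scale are purely illustrative assumptions, not our actual assessment content.

```python
# Minimal sketch of a maturity assessment structure: categories contain
# capabilities, each scored against key criteria. All capability names,
# scores, and the 1-5 scale below are illustrative placeholders.
from statistics import mean

assessment = {
    "Organization": {"Leadership alignment": [3, 4], "Skills & training": [2, 3]},
    "Process":      {"Requirements management": [3, 3], "V&V workflow": [2, 2]},
    "Technology":   {"Tool integration": [4, 3], "Data management": [3, 4]},
}

for category, capabilities in assessment.items():
    # Each capability's score is the mean of its criteria scores;
    # the category maturity is the mean over its capabilities.
    scores = [mean(criteria) for criteria in capabilities.values()]
    print(f"{category}: maturity {mean(scores):.1f} / 5")
```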
You may be interested in taking a look at a recently published paper, “Simplifying MBSE – An Implementation Journey”. Quite a few touchpoints mentioned in that paper apply to Design 4.0.
2. Can you comment on Generative Design in relation to Modeling and Simulation?
The concept of generative design (GenDes) is rapidly gaining the attention of industry as well as all of the major MCAD and simulation and analysis (S&A) solution providers. GenDes proposes the use of algorithmic methods to develop designs (geometry and material selection) based on requirements and constraints. It often leverages topology optimization in concert with additive manufacturing techniques to enable a new product design process. Generative designs are often, but not always, based on simulations of the structural physics. Examples include topology and shape optimization, parametric rule-based CAD, and optimized truss design using a catalog of available beams.
Traditional development processes generate CAD based mainly on experience and previous designs. Then, when the design is complete, simulations or tests are used to evaluate the design’s performance against its requirements.
The GenDes approach depends much less on previous designs and experience than the traditional design process does. GenDes turns the traditional process of design, build, and evaluate on its head: in the GenDes vision, all generated designs already meet the requirements. Topology optimization is an important tool for enabling GenDes. Shape optimization was developed during the 1980s and topology optimization a decade later. These techniques typically optimize geometry to meet strength, stiffness, or other physics-based requirements while minimizing weight.
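As a toy illustration of the catalog-based flavor of generative design mentioned above, here is a minimal sketch in Python: it searches a beam catalog for the lightest section that satisfies a bending-stress requirement, rather than starting from a previous design. The catalog entries, load case, and allowable stress are illustrative assumptions, not taken from any real GenDes tool.

```python
# Minimal sketch: "generate" a beam design by searching a catalog for the
# lightest section that meets a strength requirement. Section properties
# approximate standard IPE profiles; the load case is illustrative.
from dataclasses import dataclass

@dataclass
class Beam:
    name: str
    area_m2: float             # cross-sectional area
    section_modulus_m3: float  # elastic section modulus S
    density_kg_m3: float

CATALOG = [
    Beam("IPE100", 1.03e-3, 3.42e-5, 7850.0),
    Beam("IPE140", 1.64e-3, 7.73e-5, 7850.0),
    Beam("IPE200", 2.85e-3, 1.94e-4, 7850.0),
]

def select_beam(load_n: float, span_m: float, allowable_stress_pa: float):
    """Return the lightest catalog beam whose peak bending stress
    sigma = M / S stays below the allowable stress (simply supported
    beam, midspan point load: M = P * L / 4)."""
    moment = load_n * span_m / 4.0
    feasible = [b for b in CATALOG
                if moment / b.section_modulus_m3 <= allowable_stress_pa]
    if not feasible:
        return None  # no catalog member satisfies the requirement
    # Objective: minimize mass = area * length * density.
    return min(feasible, key=lambda b: b.area_m2 * span_m * b.density_kg_m3)

if __name__ == "__main__":
    best = select_beam(load_n=10_000.0, span_m=3.0, allowable_stress_pa=165e6)
    print(best)  # IPE140: the lightest section meeting the requirement
```

The point of the sketch is the inversion GenDes promises: the requirement (allowable stress) filters the design space first, so every candidate returned already meets it.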
Dr. Keith Meintjes, CIMdata Fellow & Executive Consultant, is one of the industry’s thought leaders in the area of Generative Design and is currently actively involved with the NAFEMS organization to provide educational materials in this field. CIMdata has also published several documents related to Generative Design and other related topics, such as Generative Engineering.
3. Do you see any particular challenges in the context of Systems Engineering that are different/specific to the Aerospace & Defense Industry (as opposed to say, Automotive)?
Aerospace & Defense has been practicing many Systems Engineering best practices for years, if not decades. The development of aerospace and aeronautic products has taken upwards of 8-10 years, with very precise requirement authoring and management and strict linkage to verification test cases. This process involves the development of a detailed systems architectural plan based on stakeholder requirements, which are then decomposed into many different types of requirements, including the extensive and exacting regulatory ones. There are many other requirement groupings, such as functional, performance, etc. These industries have very few opportunities to get it right, so they dedicate considerable time and resources to ensuring the exact performance characteristics are achieved while guaranteeing safety and reliability.
Let’s take a look at a specific use case, courtesy of Jamie Lynch: https://www.bugsnag.com/blog/bug-day-ariane-5-disaster
“On June 4th, 1996, the very first Ariane 5 rocket ignited its engines and began speeding away from the coast of French Guiana. 37 seconds later, the rocket flipped 90 degrees in the wrong direction, and less than two seconds later, aerodynamic forces ripped the boosters apart from the main stage at a height of 4km. This caused the self-destruct mechanism to trigger, and the spacecraft was consumed in a gigantic fireball of liquid hydrogen.
The disastrous launch cost approximately $370m, led to a public inquiry, and through the destruction of the rocket’s payload, delayed scientific research into workings of the Earth’s magnetosphere for almost 4 years. The Ariane 5 launch is widely acknowledged as one of the most expensive software failures in history.
The fault was quickly identified as a software bug in the rocket’s Inertial Reference System. The rocket used this system to determine whether it was pointing up or down, which is formally known as the horizontal bias, or informally as a BH value. This value was represented by a 64-bit floating-point variable, which was perfectly adequate.
However, problems began to occur when the software attempted to stuff this 64-bit variable, which can represent billions of potential values, into a 16-bit integer, which can only represent 65,535 potential values. For the first few seconds of flight, the rocket’s acceleration was low, so the conversion between these two values was successful. However, as the rocket’s velocity increased, the 64-bit variable exceeded 65k, and became too large to fit in a 16-bit variable. It was at this point that the processor encountered an operand error and populated the BH variable with a diagnostic value.
In layman’s terms, this can be thought of as attempting to fit 10 million liters of ice cream into a camping fridge on a hot summer’s day. It’ll be fine for the first few tubs, but after a certain threshold, you’ll be unable to fit anything else in, the fridge door will be stuck wide open, and everything will start melting really, really fast.
The backup Inertial Reference System also failed due to the same error condition, meaning that at T+37 the BH variable contained a diagnostic value from the processor, intended for debugging purposes only. This was mistakenly interpreted as actual flight data and caused the engines to immediately over-correct by thrusting in the wrong direction, resulting in the destruction of the rocket seconds later.
Several factors make this failure particularly galling. Firstly, the BH value wasn’t even required after launch, and had simply been left in the codebase from the rocket’s predecessor, the Ariane 4, which did require this value for post-launch alignment. Secondly, code which would have caught and handled these conversion errors had been disabled for the BH value, due to performance constraints on the Ariane 4 hardware which did not apply to Ariane 5.
A final contributing factor was a change in user requirements - specifically in the rocket’s flight plan. The Ariane 5 launched with a much steeper trajectory than the Ariane 4, which resulted in greater vertical velocity. As the rocket sped to space faster, there was a higher certainty that the BH value would encounter the conversion error.
Ultimately, the European Space Agency assembled a team to recover logs from the two Inertial Reference Systems, which were spread over a debris field of approximately 12 square kilometers. Their work was impeded by treacherous marshland terrain, hazardous chemicals dispersed from the rocket, and immense public scrutiny from the media, all because of a single type casting error.”
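The narrowing conversion at the heart of this failure is easy to reproduce. The Ariane flight software was written in Ada, where the out-of-range conversion raised an operand error; the following Python sketch simply shows, at the bit level, what happens when a 64-bit value is forced into 16 bits. The sample values are illustrative, not actual flight data.

```python
import struct

def to_int16_unchecked(x: float) -> int:
    """Mimic an unchecked narrowing conversion: keep only the low 16 bits
    of the integer part and reinterpret them as a signed 16-bit value."""
    return struct.unpack("<h", struct.pack("<q", int(x))[:2])[0]

# As the horizontal-bias value grows with velocity, the converted value
# fits at first, then silently wraps into garbage once it exceeds the
# 16-bit range (the real Ada code raised an operand error instead).
for bh in (120.0, 30_000.0, 40_000.0):
    print(f"{bh:>10.1f} -> {to_int16_unchecked(bh)}")
# 120.0 -> 120, 30000.0 -> 30000, 40000.0 -> -25536 (wrapped)
```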
The automotive industry typically develops new models in less than two years. Only recently has this industry recognized that the exponential growth of consumer features is spawning enormous product and process complexity. The once-accepted ‘reactive’ practices of build, test, break, and redesign are no longer sustainable. The great challenge for this industry is to understand the need for a structured, holistic transformation, with dedicated systems engineering resources and more precise requirements that follow a logical process through the engineering “V”. This is still largely foreign to the industry, with management not understanding the need for, and benefits of, proper systems engineering adoption. The need to remain competitive will become one of the major drivers for automotive to adopt a systems engineering approach involving the transformation of the organizational, process, and technological areas across the enterprise.
4. What would a toolchain look like that you would suggest to your customers for: Definition of System Architecture --> Translating System Architecture to Simulation Architecture --> Verifying/Simulating said Architecture?
CIMdata and the SMS_ThinkTank do not endorse any solution provider’s tools, so let us answer this more generically. Many tools can capture stakeholder needs and wants. These same tools can also identify the high-level system requirements, decompose them into different types of requirements (functional, performance, regulatory, etc.), and assign them to the various test cases for verification and eventual validation. If carefully planned, these hierarchies of requirements and tests can be made traceable for future analysis. Typically, the system architecture is created using another solution (e.g., a SysML-based tool). Various bridges have been created to connect these system architecture modeling tools with simulation tools (0D, 1D-4D). The maturity of these data hand-offs depends on the applications being developed.
In many cases, this ability is in its infancy. One must also consider the transfer of information between the various verification silos (mechanical, electrical, electronic, thermal, etc.). FMI is gaining considerable ground in this area but is not as complete as one may be led to believe; with FMI 3 on the horizon, it is a wait-and-see game at this point. This becomes even more complex when introducing HiL, MiL, and SiL, as the more logic-oriented applications are, once again, typically handled by dedicated solution providers. The challenge for everyone is to carefully plan the systems engineering journey knowing that no single solution provider can do it all. Existing and emerging standards must be investigated to ensure that consistent, trusted, and readily available data form the backbone of the systems engineering journey.
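As a minimal sketch of what an FMI-based hand-off between an architecture model and a simulation tool might look like, the following Python snippet uses the open-source FMPy library. The FMU file name, the output variable, and the requirement threshold are all hypothetical placeholders, not artifacts from any specific toolchain.

```python
# Hypothetical sketch of an FMI-based verification step, using the
# open-source FMPy library (pip install fmpy). 'controller.fmu', the
# 'overshoot' variable, and REQ-042 are illustrative placeholders.
from fmpy import simulate_fmu

# Run an FMU exported from a behavioral modeling tool; in a traceable
# toolchain this FMU would be linked back to a system architecture block.
result = simulate_fmu(
    "controller.fmu",
    start_time=0.0,
    stop_time=10.0,
    output=["overshoot"],  # record only the variable under verification
)

# Verify a performance requirement against the simulation trace.
max_overshoot = max(result["overshoot"])
assert max_overshoot <= 0.05, f"REQ-042 violated: overshoot={max_overshoot:.3f}"
print(f"REQ-042 satisfied: overshoot={max_overshoot:.3f}")
```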
5. Does Tesla use Systems Engineering already?
To bring a vehicle such as a Tesla to market and provide regular over-the-air software updates, the basic elements of systems engineering must be in place, especially for the embedded systems. Their manufacturing is also highly automated, which requires sophisticated process management capabilities.
The question, however, is how well MBSE is established as an enterprise-wide practice. This is where we believe work still needs to be done. In our opinion, there is still too much traditional focus on technology (including from a systems engineering perspective) and not enough on the process and business aspects.