FORT sat down with Alex Foessel of Balanced Engineering to get his take on automation system functional safety for off-highway and heavy equipment. Important considerations extend beyond the machine's core functionality to maintenance, transportation, and configuration.
Alex Foessel and his business partner Rick Weires founded Balanced Engineering two and a half years ago, building a team of mostly former John Deere employees, all with one thing in common: a passion for keeping safety top of mind in the design process while developing and commercializing smart equipment.
Balanced has been leading the charge in system architecture and autonomous system design for companies in agriculture and other off-road equipment applications, with a strong emphasis on prioritizing safety in the development of automated off-road products.
In this Q&A, Alex provides insight into key considerations for OEMs when designing these systems, including intuitive interaction with bystanders and collaborators, mapping out the entire process from job execution to maintenance, and conducting a thorough risk assessment at an early stage.
Alex Foessel: Let’s start with the interaction with operators and bystanders. For a product to be safe, it must have an intuitive, clear way to interact with bystanders and collaborators. So what is commonly called automation state and intention communication is truly important.
If you are not planning from the beginning how the machine is going to interact (the usability, the user experience, and what cues the machine communicates, i.e., what it does in an off or on state), then you are not enabling fundamentally safe communication between humans and the machine.
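As a concrete illustration, here is a minimal sketch in Python of what automation state and intention communication might look like in software. The state names, cue mappings, and announce function are illustrative assumptions, not a standard or a Balanced Engineering design.

```python
from enum import Enum, auto

# Illustrative automation states; a real product defines these per its
# operating concept and the applicable standards.
class AutomationState(Enum):
    MANUAL = auto()            # operator in full control
    AUTONOMOUS_IDLE = auto()   # automation armed but not moving
    AUTONOMOUS_ACTIVE = auto() # automation driving the machine
    DEGRADED = auto()          # fault detected, limited capability
    SAFE_STOP = auto()         # machine bringing itself to a safe state

# Hypothetical mapping from state to external cues (beacon, tone) that
# bystanders and collaborators can read at a glance.
STATE_CUES = {
    AutomationState.MANUAL:            ("beacon_off", None),
    AutomationState.AUTONOMOUS_IDLE:   ("beacon_slow_flash", None),
    AutomationState.AUTONOMOUS_ACTIVE: ("beacon_solid", "motion_tone"),
    AutomationState.DEGRADED:          ("beacon_fast_flash", "warning_tone"),
    AutomationState.SAFE_STOP:         ("beacon_fast_flash", "stop_tone"),
}

def announce(state: AutomationState, intent: str) -> None:
    """Publish the machine's state and near-term intent to its HMI outputs.
    In a real system this would drive actual lamps, speakers, and displays."""
    beacon, tone = STATE_CUES[state]
    print(f"state={state.name} beacon={beacon} tone={tone} intent='{intent}'")

announce(AutomationState.AUTONOMOUS_ACTIVE, "turning left at end of row")
```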
Safety needs to start in product design, even when it appears to slow you down.
Put yourself in the shoes of the CTO of a startup.
Let’s say that you have received Series A funding from investors, so you have limited money, and you need to deliver the product functionality, prove the value proposition, and show progress to the investors.
Because of this pressure, you put aside everything not in the direct path to delivering that functionality, and this is where I think many make what could be a fatal mistake.
When you finally get to the point where you can show all the functions, if you have not been thinking about the risks, architectural robustness, the way the code is written, and so on, then you run a high risk of having to start again.
You would not have established the performance level in the sense of how critical the risks and potential hazards around this machine are. By the end of the build, you will have to start almost from square one, as your machine will most likely not conform to industry functional safety standards.
At the same time, if you bring all the rigor and formality of functional safety standards in too early in the project, you can kill the project, because you cannot show the functionality and enable further funding.
That is why a balanced approach is so necessary: one that brings in safety considerations gradually, building safety from the start without compromising deadlines and budgets. It is the reason for the name of our company.
AF: I mentioned a balanced approach to integrating functional safety very early. But even if you do, there are areas that require extra care.
One key area is the approach to risk assessments.
We have found that some risk assessment tools in the standards are coarse and have little resolution.
If the risk assessment is too coarse, you may end up with a higher-than-necessary safety performance level, and your development will be too expensive.
So normally you will find people saying, 'well, we need to get the maximum safety.' I think that is a mistake in a way. We need to get to the appropriate level of system design rigor, considering the hazards and risks.
Similarly, it is often a mistake simply to take software components written for automotive and assume you can apply them to off-road applications. In addition to the functionality not being the same, you are also going to bring significant cost into your product.
In other words, if you put the same level of rigor into writing software for, let's say, a tractor that moves slowly in a very open field with almost no one around as into the software that drives a fast truck on a highway among many more people and vehicles, you are going to bring in expensive components and a level of redundancy that the tractor does not require.
My advice? Use a proper risk assessment, and be sure you are not overthinking or over-specifying against a different risk scenario in a way that prices your product out of competitiveness. If your product is too expensive for the market, you will inevitably delay bringing safe products to it due to financial challenges.
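To make the idea of "appropriate rigor" concrete, here is a minimal sketch of a risk-graph determination in the spirit of ISO 13849-1, where severity (S), frequency of exposure (F), and possibility of avoidance (P) map to a required performance level (PLr). A real assessment must follow the applicable standard; the encoding below is only an illustration.

```python
# Simplified encoding of an ISO 13849-1 style risk graph:
# S1/S2 - severity of injury (slight vs. serious/irreversible)
# F1/F2 - frequency and/or duration of exposure (seldom vs. frequent)
# P1/P2 - possibility of avoiding the hazard (possible vs. scarcely possible)
# Output is the required performance level PLr, from "a" (lowest) to "e".
RISK_GRAPH = {
    ("S1", "F1", "P1"): "a",
    ("S1", "F1", "P2"): "b",
    ("S1", "F2", "P1"): "b",
    ("S1", "F2", "P2"): "c",
    ("S2", "F1", "P1"): "c",
    ("S2", "F1", "P2"): "d",
    ("S2", "F2", "P1"): "d",
    ("S2", "F2", "P2"): "e",
}

def required_performance_level(severity: str, exposure: str, avoidance: str) -> str:
    """Look up the required performance level for one hazard."""
    return RISK_GRAPH[(severity, exposure, avoidance)]

# A slow tractor in an open field with rare bystander exposure may assess
# lower on F and P than a fast truck on a busy highway, yielding a lower
# PLr and therefore cheaper components and less redundancy.
print(required_performance_level("S2", "F1", "P1"))  # tractor scenario -> "c"
print(required_performance_level("S2", "F2", "P2"))  # highway scenario -> "e"
```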
AF: If you want to claim that your product does not expose bystanders, operators, and other people to unreasonable risks, well, how do you plan to support that claim?
We talk about five key elements that can sustain or provide the foundation for functional safety. I’ll credit the team at Uber ATG, now part of Aurora, for having produced this easy-to-understand safety case at a high level.
Table courtesy of Balanced Engineering.
The first is nominal operation: to begin with, is your self-driving machine acceptably safe during nominal operation?
Second, it has to be acceptably safe even when the system exhibits failures. The functional safety piece comes in when you have degradation: something fails, your system has a problem, your software has a glitch, or your camera stops working. You want to ensure the machine is acceptably safe even when it is degraded.
The next element is continuous improvement.
For large producers or companies with large fleets, you need to be able to manage those fleets, and whenever you learn of new risks, you need a set of processes for continuous improvement.
Fourth, the self-driving machine must be acceptably safe in the face of reasonably foreseeable misuse and unavoidable events. We need to put some thought into that. The machine is going to be used in ways different from what we expect.
For instance, a machine trained with AI to do X in a field in Iowa will be bought by a Brazilian or African customer and operate in a completely different context.
We should know that that may happen, and if it does, will the system still be safe?
And finally, this is critical: a company needs to be trustworthy.
In other words, you need to show that you have governance, that you have a way to deal with issues, and that the issues are not being hidden.
You need to start your development by asking, 'What are the key elements, in the product and beyond it, that we are going to work on to sustain or support the claim that we have an acceptably safe machine?'
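As a minimal sketch of how a team might skeleton out those five elements as an explicit safety case, consider the following; the structure, field names, and evidence items are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyClaim:
    """One top-level claim in the safety case and the evidence behind it."""
    claim: str
    evidence: list[str] = field(default_factory=list)

# The five high-level elements discussed above, expressed as explicit claims.
# The evidence items are placeholders for whatever artifacts a team produces.
safety_case = [
    SafetyClaim("Acceptably safe during nominal operation",
                ["hazard analysis", "field validation tests"]),
    SafetyClaim("Acceptably safe when the system exhibits failures",
                ["FMEA", "fault injection", "degraded-mode behavior tests"]),
    SafetyClaim("Continuously improving",
                ["fleet incident monitoring", "corrective-action process"]),
    SafetyClaim("Acceptably safe under reasonably foreseeable misuse "
                "and unavoidable events",
                ["misuse analysis", "new-market operating-context reviews"]),
    SafetyClaim("The company is trustworthy",
                ["safety governance", "transparent issue tracking"]),
]

for item in safety_case:
    print(f"- {item.claim} ({len(item.evidence)} evidence items)")
```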
This is an emerging area of concern. Normally we would expect the operator to deliver the expected job performance or quality level. As machines become more automated, they are increasingly exposed to a performance or quality liability gap.
Let me explain.
If your customer buys into the value proposition of a certain level of reliability or operational uptime, that creates a real expectation.
If you do not deliver (let's say the machine breaks, does not work in the conditions, or is simply not performing to the level it should), then that customer will be unhappy, to say the least.
Let me give you a real example in agriculture.
If I have a very narrow planting season and the machines break down, or they do not function and time goes by, that means I am going to be planting much later, which will reduce my yield, and I will not make the money I expected. Right?
So that lack of performance or job quality (I am talking about operational or mechanical uptime) is going to eat into your customer's potential revenue and profit.
When a machine does not deliver on that, it is a bad investment for the customer, and worse yet, poor automation performance can cause the customer real financial losses.
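A back-of-the-envelope calculation shows the mechanism. All figures below (acreage, yield penalty per day of delay, grain price) are assumed for illustration only.

```python
# Back-of-the-envelope: cost of downtime during a narrow planting window.
# Every number here is an illustrative assumption, not agronomic data.
acres = 2000
expected_yield_bu_per_acre = 200   # bushels/acre with on-time planting
yield_penalty_per_day = 0.01       # assume ~1% yield loss per day of delay
price_per_bu = 4.50                # $/bushel
days_of_downtime = 5

lost_bushels = (acres * expected_yield_bu_per_acre
                * yield_penalty_per_day * days_of_downtime)
lost_revenue = lost_bushels * price_per_bu
print(f"Lost revenue from {days_of_downtime} days of downtime: ${lost_revenue:,.0f}")
# -> Lost revenue from 5 days of downtime: $90,000
```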
It goes back to the trustworthiness of the company. Are you trustworthy?
Most respected brands will go out of their way to make sure that the equipment has mechanical and operational availability. The more functions depend on the automation, the more the liability may fall on the machine rather than the operator, if there even is an operator.
Let me go into a very different example: startups.
Let’s imagine a startup that has been working on innovative robotic technologies to support the harvesting of grapes. Typically the focus would be on working hard all the way to a functional product. Yet the product lacks the proper risk characterization and documentation that conform to known standards.
So this company could make a sale, but often customers, especially those with a safety team that works with OSHA, would not approve the use of this technology. That is terrible for a startup expecting its machines to work and its customers to be happy and buy more.
Instead, usage could be prohibited because the risks were not well characterized and the documentation was not provided.
This is where the experience of our [Balanced] engineers and safety experts comes in for startups: by providing a balanced approach that produces a safe product with the documentation that allows worker safety teams to approve the adoption of these cutting-edge tools, for instance in orchards with an intensive worker presence.
Given the scarce development resources and the innovative nature of small companies' efforts, it is even more important to start with a gradual, balanced consideration of product safety.
AF: First, I want to credit the agriculture industry and many others for what they are doing.
Participation in standards is very good in off-road, construction, and agriculture, and other industries, such as mining, are working hard to bring in their best safety practices and get them on paper.
I would love to help other companies make functional safety and product safety top of mind by embedding those standards into product development.
In the last five years, we have seen tremendous advances in integrating AI-related components into existing systems.
At Balanced Engineering, we are anticipating the future by collaborating with partners to create and adapt tools for safely integrating machine learning modules into automated systems. Our goal is to ensure that these systems behave correctly, prioritize safety, and avoid making mistakes.
When it comes to an embedded computer running conventional code, I can sit down and have two people look at every line of code and ensure that the software leads to a safe product because the code has integrity, quality, and so on.
Within some AI modules, I don't have code anymore. Instead, there is a neural network: hundreds of thousands or millions of numbers. No army in the world can inspect that and tell me whether it works well.
That's where we're hedging our bets. Even though we do not develop technology, we want to develop the methodologies, standards, practices, and tools so companies can perform these integrations with the assurance that when they sign a certificate, the product has been built to be acceptably or reasonably safe.
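One commonly discussed pattern for such integrations, offered here only as a minimal sketch and not as Balanced Engineering's specific method, is to wrap the uninspectable neural network in a small, deterministic runtime monitor that reviewers can audit line by line. The function names and limits below are hypothetical.

```python
import random

def ml_speed_command(sensor_frame) -> float:
    """Stand-in for a neural network that proposes a ground speed in m/s.
    Its internals (millions of weights) cannot be reviewed line by line."""
    return random.uniform(-1.0, 5.0)  # stub: may propose out-of-envelope values

MAX_SPEED_MPS = 2.0  # safety envelope derived from the risk assessment,
                     # enforced outside the network

def guarded_speed_command(sensor_frame, confidence: float) -> float:
    """A deterministic monitor small enough for two reviewers to inspect."""
    if confidence < 0.8:                        # low confidence -> safe fallback
        return 0.0                              # command a stop
    proposed = ml_speed_command(sensor_frame)
    if not (0.0 <= proposed <= MAX_SPEED_MPS):  # outside envelope -> safe fallback
        return 0.0
    return proposed

print(guarded_speed_command(sensor_frame=None, confidence=0.95))
```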
For more information about Balanced Engineering, click here to visit their website.
Safety Check is an ongoing Q&A series with experts in robot safety, standards, and more. Check out this edition featuring Carole Franklin, Director of Standards Development at A3.
Want to learn how FORT's wireless safety solutions can help you comply with robot safety standards? Schedule a consultation with one of our safety experts.