Predictive maintenance is not a new concept, of course. Neither is the use of ultrasonic technology out on the floor. The combination of the two, though, especially as machine intelligence continues to improve, remains relatively novel.

Amnon Shenfeld and his team at 3DSignals — another newer company based in the IoT hotbed of Israel — are focused on exactly that combination, bringing their proprietary ultrasonic hardware and software to the manufacturing and industrial space. They want to be able to detect when a machine’s effectiveness might be starting to fade based on little more than how it sounds. “What we’re doing,” Shenfeld said, “is essentially developing a mechanical ear.”

About two years ago, Shenfeld gathered some friends — some fellow engineers with curious minds — and discovered that “while computer vision has advanced during the last decade, especially around subjects like deep learning, and while a lot of research had been done to create algorithms to identify visual objects, not so much had been done with the acoustic landscape.” Far more time and resources had been poured into advancing speech recognition — think Apple Siri and the new HomePod, and Amazon Home. Sound and acoustics? They were silent. (Pun very much intended.)

The system they developed uses airborne ultrasonic sensors that detect “anomalies in the sound that machines make while running.” Deep learning and predictive analytics software then processes those sound signals, sifts through the data and sends out alerts to users. Over time, the system learns to recognize certain sounds and pick up on certain cues, much as you can pick out your child’s voice over dozens of others in a crowd.
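The pipeline described here — learn what a machine normally sounds like, then flag deviations from that baseline — can be sketched in a few lines. Everything in this example (sample rate, window size, the simulated tones, the scoring method) is illustrative, not 3DSignals’ actual system:

```python
import numpy as np

def spectral_signature(signal, n_fft=256):
    """Average magnitude spectrum over fixed-size windows of a recording."""
    windows = [signal[i:i + n_fft] for i in range(0, len(signal) - n_fft + 1, n_fft)]
    spectra = [np.abs(np.fft.rfft(w * np.hanning(n_fft))) for w in windows]
    return np.mean(spectra, axis=0)

def anomaly_score(baseline, new_signal, n_fft=256):
    """Relative spectral distance between a new recording and the learned baseline."""
    new_sig = spectral_signature(new_signal, n_fft)
    return np.linalg.norm(new_sig - baseline) / np.linalg.norm(baseline)

# Simulate a healthy machine hum (a 60 Hz tone plus noise) and a "worn"
# machine that has picked up a higher-frequency rattle.
rng = np.random.default_rng(0)
t = np.arange(16384) / 8000.0
healthy = np.sin(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(t.size)
worn = healthy + 0.8 * np.sin(2 * np.pi * 1800 * t)

baseline = spectral_signature(healthy)
print(anomaly_score(baseline, healthy))  # near zero: same sound as the baseline
print(anomaly_score(baseline, worn))     # large: the rattle shifts the spectrum
```

In a real deployment the baseline would be learned continuously across operating modes, and a trained model would replace the simple spectral distance, but the basic contract — sound in, alert out — is the same.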

The result is another tool to keep machines, production lines, factory floors, even whole supply chains moving forward.

One of 3DSignals’ first major projects using the technology involved an Israeli steel production plant. Over six months, Shenfeld and his team developed a set of algorithms and tools to determine, in real time, the condition of saw blades cutting into products on the production line. “It was a major pain point for the plant,” he said. Blade failure was, in fact, the plant’s biggest problem, “because there are no technologies capable of telling you about the condition of a rotating blade while it is working.”

The raw material vibrated while it was being cut, creating an unstable signal; reading temperatures was unreliable because most cutting was done cold in liquid; and voltage monitoring on the engine running the blade also produced variable readings. “There are no real solutions to warn you that you’re nearing the end of the life of this blade, which is only supposed to last a couple thousand cuts,” Shenfeld said. “If you miss the replacement, you could have a catastrophic break event.”
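The blade problem amounts to trending an acoustic wear score cut by cut and projecting when it will cross an alarm level — warning well before the “couple thousand cuts” limit. A toy sketch of that projection, with every number (scores, thresholds, blade life) invented for illustration:

```python
import numpy as np

WEAR_LIMIT = 2000   # nominal blade life in cuts (illustrative)
ALERT_SCORE = 0.6   # acoustic deviation treated as imminent failure (illustrative)

def remaining_cuts(scores, window=50):
    """Extrapolate when a smoothed acoustic wear score will reach ALERT_SCORE.

    `scores` holds one deviation-from-baseline value per completed cut.
    Returns an estimated number of cuts left, or None if there is no
    measurable upward wear trend yet.
    """
    recent = np.asarray(scores[-window:])
    cuts = np.arange(len(recent))
    slope, _ = np.polyfit(cuts, recent, 1)
    if slope < 1e-6:          # treat flat or falling trends as "no wear signal"
        return None
    return max(0.0, (ALERT_SCORE - recent[-1]) / slope)

# Simulated wear: the score creeps upward as the blade dulls.
rng = np.random.default_rng(1)
scores = [0.1 + 0.0004 * cut + 0.01 * rng.standard_normal() for cut in range(1200)]
print(remaining_cuts(scores))  # an estimate of cuts left before the alarm level
```

The point of the sketch is the shape of the solution, not the numbers: because vibration, temperature and voltage were all unreliable, the sound signature becomes the one input steady enough to trend against a replacement deadline.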

The 3DSignals team was certain it could train its new algorithms to identify blades running at different speeds, or operating differently from others — and that those algorithms would eventually outperform any trained ear on the floor.

“Maintenance engineers, mechanical engineers, they have these intimate relationships with machines, like some people have with their cars,” Shenfeld said. “Whenever something goes out of tune, the first instinct is, ‘Hm, that doesn’t sound right.’ We wanted to replicate this human brain capability to recognize something out of the ordinary in the sound that machines make in order to identify first an anomaly, and later, like an expert mechanic, classify exactly what is wrong by the sound signature that is coming out of machines.”

3DSignals has plans to work soon in the steel and energy sectors across the United States, with live microphones deployed in all sorts of American plants. “We’re looking to move into oil and gas, mining, chemicals and petrochemicals — any sort of manufacturing industry,” Shenfeld said. The company is now working with its first gas-powered turbines.

“There are a lot of self-sustained decisions being made by machines, and it’s obvious there will be more and more of these decisions as time progresses and the machine learning process becomes more sophisticated,” Shenfeld said. “The way to enable these machines to make the right decisions is to give them not only the computer power — not only a brain — but also the input that will facilitate the decision-making of a capable brain, or of capable decision-making algorithms.

“I believe that sound is a white space. It hasn’t been utilized, especially with learning algorithms.”