Software: one of the hidden heroes of the Perseverance mission

Taylor Armerding
6 min read · Mar 1, 2021
Image credit: NASA/JPL-Caltech

Getting a spacecraft all the way to Mars — a trip of about 300 million miles — and landing a rover on a target the size of a postage stamp obviously requires sophisticated technology and a team effort.

That’s because, as is also obvious, it’s unbelievably risky and complex. But the team at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, got it done. Americans saw images beamed back to Earth from Perseverance on Feb. 18, just hours after it landed safely in Jezero Crater, which scientists believe held a river delta and lake 3.5 billion to 3.9 billion years ago. Video of the landing followed just a few days later.

And while the most glamorous and compelling visuals of an event like this may be fiery, roaring rockets, the futuristic rover itself or the desolate surface of an alien planet, an indispensable tool in getting Perseverance there was software — those endless strings of letters, numbers, and symbols that look like gibberish to most of us but that are the primary enabler of modern life and, yes, space travel.

So there’s a certain amount of heroism due those who wrote and tested that software.

It was software that got the spacecraft through the final so-called “seven minutes of terror,” when it had to slow from about 12,000 miles per hour to roughly 2 mph, navigate to the landing spot, lower Perseverance to the surface with a “sky crane” from about 66 feet up, then disconnect the cables and fly off to crash-land a safe distance away — all without any oversight from Pasadena, since there’s about an 11-minute delay in any communication from that distance.

No second chances

That takes something pretty close to perfection. There are no second chances. Failure would have meant $2.7 billion and years of work down the drain.

And while the U.S. has successfully sent nine robotic missions to Mars, beginning with the Viking probes in 1976, Matt Wallace, the rover’s deputy project manager at JPL, said at a press conference the day before the landing that almost half of all exploratory missions sent to Mars have failed.

That means the software has to be just about perfect as well. Especially during stages like the seven minutes of terror. At that point, the humans just pray they got everything right. And this time, they did.

James Croall, technical product management director with the Synopsys Software Integrity Group, would know. He worked at Coverity® (since acquired by Synopsys) in 2011 when the company provided static application security testing (SAST) of the software in NASA’s Curiosity rover mission to Mars. A SAST tool automatically finds bugs and other defects by analyzing code as it’s being written or assembled, without executing it — static, in other words.
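
To make that concrete, here is a minimal, invented C fragment (not from any NASA codebase) showing the kinds of defects a SAST engine reports without ever running the program: a pointer dereferenced before its null check, and a file handle leaked on an error path.

```c
#include <stdio.h>

/* Hypothetical telemetry-logging routine, invented for illustration.
   A static analyzer flags both defects below without executing the code. */
int log_reading(const char *path, const double *reading)
{
    FILE *fp = fopen(path, "a");
    if (fp == NULL)
        return -1;

    /* Defect 1: *reading is dereferenced here, before the null check
       below, so that check can never protect this line. */
    if (fprintf(fp, "value=%f\n", *reading) < 0)
        return -1;              /* Defect 2: early return leaks fp. */

    if (reading == NULL) {      /* Dead code: the check comes too late. */
        fclose(fp);
        return -1;
    }

    fclose(fp);
    return 0;
}
```

Engines in this category typically report those as a reversed null check and a resource leak, pointing at the exact lines involved.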

Curiosity, which landed in August 2012, is still functioning.

And Croall said the successes of both Curiosity and now Perseverance are stunning achievements. “It has always blown my mind that space missions are so precise and can operate so smoothly,” he said. “I can’t imagine the experience of working on the project and sitting through the seven minutes of terror.”

But he said the testing of the software for missions like these is more intense and comprehensive than anything for earthbound use.

For starters, NASA doesn’t use just one SAST tool. It uses every quality tool on the market. “Different analysis engines find different things,” Croall said. “Fundamentally they are written differently, so even though we all find a lot of the same resource leaks etc., due to different trade-offs for performance, memory, and so on, we also find a lot of different things. So in the most severe mission-critical application, it would make sense to run not just different kinds of testing but different tools.”

Keep it simple

Another element of achieving virtually perfect software is to keep it short and simple.

Of course, short is relative. The software for the Perseverance mission has about 2 million lines of code — about the same as it took for Curiosity almost a decade earlier. That may sound like a lot, but compared with some earthbound and seemingly far less sophisticated systems, it’s minuscule.

A passenger jetliner can have 7 million to 25 million lines of code. The average modern vehicle has about 100 million lines, and the new Ford F-150 pickup truck has 150 million.

But Croall said most of those tens of millions of lines in a vehicle are “skewed toward infotainment and components that aren’t so mission-critical. Things that are controlling windows, doors, and engine functions are typically pretty tight and small.”

Also, the autonomous features in a car are necessarily much different from those in a spacecraft. Obviously, the Perseverance mission didn’t need to deal with risks from other vehicles, traffic controls, and so on.

“With complexity comes risk and a greater chance for mistakes,” Croall said.

He said there are parallels with aerospace and defense systems. Any airplane flying in the U.S. has much of its avionics and control software written in Ada, a high-level programming language designed for large, safety-critical, and embedded systems.

“That helps avoid common programming mistakes,” he said. “Ada imposes many rules and helpful patterns on the developer so that the common failure modes can’t happen or can be recoverable.”
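
Ada itself isn’t shown here, but a rough C sketch (with invented names) illustrates the class of mistake Croall is describing: C silently accepts an out-of-range value, whereas an Ada range-constrained subtype would reject it at compile time or raise Constraint_Error at run time.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical example, invented for illustration. The intended range of
   a gimbal angle is 0..90 degrees, but a C typedef cannot enforce that.
   In Ada, "subtype Gimbal_Angle is Integer range 0 .. 90;" would make
   the bad assignment below a compile-time or run-time error. */
typedef int32_t gimbal_angle_deg;    /* Intended range: 0..90 (unenforced). */

static void command_gimbal(gimbal_angle_deg angle)
{
    printf("Commanding gimbal to %d degrees\n", (int)angle);
}

int main(void)
{
    gimbal_angle_deg angle = 270;    /* Silently accepted by C. */
    command_gimbal(angle);           /* The invalid value propagates. */
    return 0;
}
```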

Croall said that at some point every line of the codebase has to be tested and every condition within every decision exercised, with each condition shown to independently affect the outcome — a standard known as MC/DC, or modified condition/decision coverage.

“Every decision and criteria has to be documented and tested to understand what would happen in a failure and if all the failure modes are handled smoothly. It is truly next-level obsession for safety,” he said, adding “I imagine similar practices go for the NASA landers. Building this kind of software really and truly is a different art than building anything else on the planet.”
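
As a simplified, invented illustration (not drawn from any actual flight code), consider a guard with three conditions. MC/DC demands tests showing that each condition independently flips the decision’s outcome, which this decision can satisfy with four test cases rather than all eight combinations.

```c
#include <stdbool.h>

/* Hypothetical guard condition, invented purely for illustration. */
static bool deploy_parachute(bool altitude_ok, bool velocity_ok, bool timer_expired)
{
    /* Decision under test: A && (B || C) */
    return altitude_ok && (velocity_ok || timer_expired);
}

/* A minimal MC/DC test set for A && (B || C): four cases, not all eight.
 *
 *   A  B  C   result   what it demonstrates
 *   T  T  F   true     baseline
 *   F  T  F   false    flipping A alone flips the result
 *   T  F  F   false    flipping B alone flips the result
 *   T  F  T   true     flipping C alone flips the result (vs. T F F)
 *
 * Each condition is shown to independently affect the outcome, which is
 * what MC/DC requires on top of ordinary line and branch coverage.
 */
```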

Hacking in space?

Of course, even the best software testing in the world doesn’t guarantee perfection, which raises the question of whether a mission like this could be vulnerable to hackers. It’s possible, after all, for ground control to send commands to orbiting spacecraft or to rovers on another planet. Could a hacker compromise the communication system and send malicious commands to the rover?

There was some discussion of that possibility on a question-and-answer forum in 2012, after Curiosity made it to Mars. One of the participants, a command controller at a ground station, said a hack would be extremely difficult, given the security protocols in place, and also that an attacker would have to compromise NASA’s highly secured “red network.” Communication over that distance would also require an antenna on the scale of the Deep Space Network antennas the U.S. deploys around the world.

He said it might be possible to “forge communication” with a space vehicle or launch a so-called “man-in-the-middle” (MITM) attack between ground control and the vehicle, but in either case the payoff would likely not be worth the effort, unless the attacker was a nation-state looking to steal technology.

Croall said he views the risk of hacking communication with a spacecraft as one of those scenarios that’s possible in theory but highly unlikely. “If you build the system well, it’s less likely to happen,” he said. “Many MITM attacks happen because the system was designed for convenience, and they don’t need to design an uplink to a NASA rover for simplicity and user experience.”
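
For a sense of what building the system well can look like, here is a heavily simplified, hypothetical sketch (using OpenSSL, with invented function and frame names) of authenticating an uplinked command with a shared-secret HMAC, so that a forged or tampered frame is simply rejected. It is illustrative only and does not describe NASA’s actual uplink protocol.

```c
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/crypto.h>
#include <stdbool.h>
#include <stddef.h>

#define MAC_LEN 32  /* SHA-256 output size in bytes */

/* Purely illustrative: reject any command frame whose HMAC does not
   match the one recomputed with the shared secret key. */
static bool command_is_authentic(const unsigned char *frame, size_t frame_len,
                                 const unsigned char *received_mac,
                                 const unsigned char *key, size_t key_len)
{
    unsigned char expected[MAC_LEN];
    unsigned int expected_len = 0;

    /* Recompute the HMAC over the received frame with the shared key. */
    if (HMAC(EVP_sha256(), key, (int)key_len, frame, frame_len,
             expected, &expected_len) == NULL)
        return false;

    /* Constant-time comparison avoids leaking information via timing. */
    return expected_len == MAC_LEN &&
           CRYPTO_memcmp(expected, received_mac, MAC_LEN) == 0;
}
```

A real system would also need replay protection (sequence numbers or timestamps) and careful key management, which is exactly the kind of design work that isn’t driven by user convenience.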

And if the software developers supporting the mission did discover a bug or other defect after launch, they could send the rover a patch or update.

That happened in 2012 with Curiosity: the rover needed different software to drive around on the planet’s surface than it did for the landing, and it didn’t have enough random access memory (RAM) to hold both at once.
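
Conceptually, that kind of swap amounts to: uplink the new load, verify its integrity, and only then mark it as the image to boot. The hypothetical C fragment below (invented names, simple CRC check) is illustrative only and is not Curiosity’s actual flight software update mechanism.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical uplinked software load; names invented for illustration. */
struct sw_image {
    const uint8_t *data;
    size_t         len;
    uint32_t       expected_crc;
};

/* Bitwise CRC-32 (reflected, polynomial 0xEDB88320). */
static uint32_t crc32_simple(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Returns true only if the uplinked image is intact and may replace the
   current load as the active boot image; otherwise keep the old software. */
static bool ok_to_activate_new_load(const struct sw_image *img)
{
    return crc32_simple(img->data, img->len) == img->expected_crc;
}
```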

The computers running it had only 120 megabytes of RAM. Compare that with most laptops today, which have at least 8 gigabytes, or more than 65 times as much.

As an article about it at the time put it, “it’s a great computer if you’re living in 1995. But it’s built to withstand wild temperature swings, radiation, and physical shaking.”

Croall made the same point. “The hardware used in these things is usually special, hardened for exposure to all kinds of things including radiation. And when you’re building a closed system like this, that nobody else ever has to interact with, it is easier to build security in. Many security weaknesses come from compromise for user convenience.”

So don’t expect a ticker tape parade or the Medal of Freedom for the Perseverance software. But keep in mind that the rover wouldn’t be on Mars without it.

Taylor Armerding

I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.