FLIGHT of the SOFTWARE BUGS!



In-Flight Computer Bugs

All software can, and does, go wrong. With aircraft containing ever more of it, can we really be sure they will fly safely?


By: Paul Marks


Cockpit displays plunged into darkness, engines that throttle back during take-off and contradictory airspeed readings are just some of the problems caused in recent years by inexplicable failures in the software that controls aircraft. So far there are no known cases of such failures alone causing an accident.


Speculation that software problems led to the crash landing of the British Airways Boeing 777 at Heathrow airport, London, on 17 January 2008 remains unconfirmed. While software was implicated in the Korean Air jumbo jet crash on Guam in August 1997, which killed 228 people, human error, not software design, was to blame. Software failures remain a risk, though, and with aircraft makers set to increase the proportion of aircraft functions controlled by software, experts are warning that failures will become more frequent, increasing the chance that one will cause an accident.


In early aircraft, moving parts such as the rudder and wing flaps were linked to controls in the cockpit either by a system of cables and pulleys or by hydraulics. In the 1970s, aircraft makers realized they could get rid of much of this very heavy equipment and replace it with lightweight wiring and systems driven by electric motors in the wings and tail. Such “fly-by-wire” (FBW) systems vastly increased fuel efficiency.


FBW had another advantage too: because it uses electrical signals, for the first time it allowed a computer to be placed between the pilot and the moving parts. The computers were programmed to modify the pilots’ instructions in certain instances: for example, to stop them moving the rudder too far or too quickly and so damaging the plane. They also allowed the plane’s aerodynamics to be finely adjusted during flight in response to wind conditions, further improving fuel economy.
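
The kind of envelope-protection logic described above can be sketched in a few lines of C. The limits, function name and loop rate below are illustrative assumptions, not values from any real flight-control system:

    #include <stdio.h>

    /* Illustrative limits: real values are aircraft-specific and certified. */
    #define RUDDER_MAX_DEG      25.0   /* maximum deflection either way, degrees */
    #define RUDDER_MAX_RATE_DPS 40.0   /* maximum deflection rate, degrees/second */

    /* Clamp a pilot's rudder command so it never exceeds the structural
       deflection limit and never moves faster than the rate limit allows.
       prev_deg is the surface position on the previous control cycle;
       dt is the loop period in seconds. */
    static double limit_rudder(double commanded_deg, double prev_deg, double dt)
    {
        /* Position limit: clip the command to the allowed deflection range. */
        if (commanded_deg >  RUDDER_MAX_DEG) commanded_deg =  RUDDER_MAX_DEG;
        if (commanded_deg < -RUDDER_MAX_DEG) commanded_deg = -RUDDER_MAX_DEG;

        /* Rate limit: move at most RUDDER_MAX_RATE_DPS * dt per cycle. */
        double max_step = RUDDER_MAX_RATE_DPS * dt;
        double step = commanded_deg - prev_deg;
        if (step >  max_step) step =  max_step;
        if (step < -max_step) step = -max_step;

        return prev_deg + step;
    }

    int main(void)
    {
        /* A violent full-deflection demand at a 50 Hz loop rate is fed
           through in small, rate-limited steps instead of all at once. */
        double pos = 0.0;
        for (int i = 0; i < 5; i++) {
            pos = limit_rudder(30.0, pos, 0.02);   /* pilot demands 30 degrees */
            printf("cycle %d: rudder at %.1f deg\n", i, pos);
        }
        return 0;
    }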


But the addition of software led to different problems. Some of these are documented in a report completed last year by the US National Academy of Sciences (NAS). It lists a number of instances in which software bugs caused frightening problems during flight. In one instance in August 2005, a computer in a Boeing 777 presented the pilot with contradictory reports of airspeed: it said the aircraft was going so fast it could be torn apart, and at the same time that the plane was flying so slowly it would fail to generate enough lift to stay in the air. The pilots managed to control the aircraft nonetheless, but it was a stark illustration of what can go wrong.


In another instance in 2005, this time in an Airbus A319, the pilots’ computerised flight and navigation displays, as well as the autopilot, auto-throttle and radio, all lost power simultaneously for 2 minutes. Another time, what the NAS calls “faulty logic in the software” meant that when the computer controlling fuel flow failed in an Airbus A340, the back-up systems were not turned on.


Now Boeing is planning to get rid of the hydraulic wheel brakes on its 787 in favour of lighter, electrically actuated ones, and to shift from using pneumatic engine starters and wing de-icers to electrical ones. Airbus will also be adopting a “more electric” approach in its forthcoming A350, says the plane’s marketing director. “The addition of more electric systems will mean even more computer control,” says Martyn Thomas, a systems engineering consultant based in Bath, UK, and a member of the NAS panel that produced the software report last year. “It will mean more wires or shared data lines, and so still more possibilities for errors to arise.”


Why do software bugs arise, and why can’t they be removed? Bugs are sections of code that do something different from what the programmer intended, usually when the code has to deal with circumstances the programmer didn’t anticipate. All software is susceptible to bugs, so it must be tested under as many different circumstances as possible. Ideally, bugs are discovered at this stage and removed before the software is actually used. This is very difficult in complex systems like aircraft because the number of possible scenarios, such as different combinations of air density, engine temperature and aerodynamic behaviour, is huge.
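
A hypothetical illustration in C of how such a bug hides (the sensor scenario and names are invented for this sketch, not drawn from any real system): the code behaves correctly for everyday inputs, and only an unanticipated edge of the input range exposes the flaw.

    #include <stdint.h>
    #include <stdio.h>

    /* Average two redundant 16-bit sensor counts. Correct for the values
       the programmer expected, but if both sensors ever report near their
       maximum, the intermediate sum silently wraps around. */
    static uint16_t average_counts(uint16_t a, uint16_t b)
    {
        uint16_t sum = a + b;   /* wraps modulo 65536 when a + b > 65535 */
        return sum / 2;
    }

    int main(void)
    {
        /* Normal conditions: correct answer. */
        printf("%u\n", average_counts(1000, 2000));    /* prints 1500 */

        /* Untested edge of the input range: silently wrong answer. */
        printf("%u\n", average_counts(60000, 60000));  /* prints 27232, not 60000 */
        return 0;
    }

Unless the test plan happens to include two simultaneously saturated sensors, this bug survives testing and waits in the deployed software.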


To test for bugs, most aircraft manufacturers use a set of guidelines called the DO-178B standard, which was created by the US-based Radio Technical Commission for Aeronautics, a collection of government, industry and academic organizations, together with the European Organization for Civil Aviation Equipment. Recognized by the US Federal Aviation Administration (FAA), the standard rates software on how seriously it would compromise safety were it to fail, and then recommends different levels of testing depending on that rating.


The most rigorous “level A” testing, reserved for software whose failure would cause a catastrophic event, is called “modified condition/decision coverage” (MCDC). It requires tests demonstrating that each logical condition in the code can independently affect the program’s behaviour, placing the software in as many situations as possible to see if it crashes or produces anomalous output.
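
To make that concrete, here is how MCDC treats a single decision with three conditions in C. The warning logic is invented for illustration, not taken from the NAS report or any real avionics system:

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented warning logic: sound the stall alarm when airspeed is low
       and either the flaps are retracted or the angle of attack is high. */
    static bool stall_alarm(bool low_speed, bool flaps_up, bool high_aoa)
    {
        return low_speed && (flaps_up || high_aoa);
    }

    int main(void)
    {
        /* MCDC demands tests showing each condition independently flips
           the outcome. One minimal set is four vectors, rather than the
           eight an exhaustive test would need:

             low_speed flaps_up high_aoa -> alarm
             T         T        F        -> T   (with row 2: low_speed matters)
             F         T        F        -> F
             T         F        T        -> T   (with row 4: high_aoa matters)
             T         F        F        -> F   (with row 1: flaps_up matters) */
        printf("%d\n", stall_alarm(true,  true,  false));  /* 1 */
        printf("%d\n", stall_alarm(false, true,  false));  /* 0 */
        printf("%d\n", stall_alarm(true,  false, true));   /* 1 */
        printf("%d\n", stall_alarm(true,  false, false));  /* 0 */
        return 0;
    }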


But it isn’t clear whether the MCDC test includes enough different conditions to provide any greater protection than the level B tests done on less safety-critical software. So in 2003, the UK Ministry of Defence contractor QinetiQ ran both levels of tests on a range of software deployed in military transport aircraft. MCDC should have picked out many more flaws than the level B tests, but the QinetiQ team found “no significant difference” between them.


“MCDC testing is not removing any significant numbers of bugs,” says Thomas. “It highlights the fact that testing is a completely hopeless way of showing that software does not contain errors.” “The criteria currently used to evaluate the dependability of electronic systems for many safety-related uses are way too weak, way insufficient,” says Peter Ladkin, a computer scientist specialising in safety engineering at the University of Bielefeld in Germany.


Instead of focusing on testing, Ladkin and Thomas want to see a change in the way safety-critical software is written. Neither Boeing nor Airbus responded to questions about exactly which programming languages their software systems are written in, but according to Les Dorr, spokesman for the FAA, which certifies US commercial software systems, it is a mixture of the languages C, C++, assembler and Ada, which was developed by the Pentagon. Some of those languages, such as C, allow programmers to write vague or ambiguous code, says Thomas, which is the kind of thing that often leads to bugs.
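
A contrived example of what Thomas means: the following compiles cleanly as C, yet both marked statements have undefined behaviour, so different compilers, or the same compiler at different optimisation levels, can legitimately produce different results. (These fragments are illustrative and not drawn from any avionics codebase.)

    #include <stdio.h>

    int main(void)
    {
        int trim;                 /* declared but never initialised */
        printf("%d\n", trim);     /* undefined behaviour: prints whatever
                                     happened to be in that memory */

        int x = 0;
        x = x++ + 1;              /* x modified twice with no sequencing:
                                     undefined behaviour, and compilers
                                     disagree on the result */
        printf("%d\n", x);
        return 0;
    }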



To solve this problem, he suggests using specialised computer languages that can be verified as the programmer is coding. Such languages include B, a formal method developed by Jean-Raymond Abrial, and SPARK, a restricted version of Ada. These languages and their compiler software have strict controls built in, so it is very difficult for programmers to write vague or ambiguous code.


The NAS report also backs stricter controls on languages. “Safe programming languages... are likely to reduce the cost and difficulty of producing dependable software,” it says.


The FAA agrees that an increase in the software control of planes “makes validation and verification of software more challenging” and is working to ensure that validation keeps pace with technical advances. But Thomas says their progress is too slow. “How long are we prepared to go on using tools we know are broken to develop software on which people’s lives depend? No other engineering discipline would rely on tools that have dangerous faults.”


SOURCE:

New Scientist magazine

9 February 2008 (pp. 28-29)

www.newscientist.com


