When I received my Ph.D. in Particle Physics in 1987, I qualified as one of the wonkiest elite in science. If I had been concerned with proving that I was smart, I might have stayed in physics, but the expectations for practical applications of fundamental physics had eroded greatly since my freshman year. I wanted the larger world to benefit from my work, so I took a job at a national laboratory. After a brief post-doc in fundamental physics, I moved over to environmental science. Throughout, the growing importance of computerized control and simulation meant that I enjoyed a distinct competitive advantage over my peers, as I had learned to program from one of the foremost practitioners of his generation – my father. When I became a full-time software developer, my background in physics allowed me to collaborate with engineers, to the extent that I would be brought in on engineering meetings when my peers were unavailable.
Now this may seem like just bragging, but the point is that my career has been a dynamically evolving mash-up of science, engineering and programming. My experience was filtered through a practice of systems analysis that led me to examine and control the way that those disciplines interact. So when I think about science, I don’t think about it as “what scientists do.” I do consider myself a scientist, but I do engineering and programming as well, and I perceive the three disciplines as very different activities.
I took a course on philosophy of science as an undergraduate, and I won’t drag you, dear reader, through all the definitions that have been offered. Most of them hold that Francis Bacon’s articulation of the scientific process was a magic portal for the human intellect, as though practical efficacy and the rational ordering of knowledge had not been recognized virtues among the ancients. This leads many philosophers of science to be overly concerned with truth, when what is really of interest to us as people is what might yet become true.
The power of science is in allowing us to pierce the shadowy veil of possibility. In biology, understanding of the variety of living things and their mutual dependencies gives us the power to sustain agriculture, breed robust animals, and improve our health. Chemistry empowers us to predict the stability and properties of new substances. And physics probes the fundamental mechanisms that determine both the stability of the world around us and our ability to manipulate it.
So science provides us with pure knowledge, unconstrained by our desires or intentions. It therefore tends to attract people who are driven by curiosity. That may sound like a trivial thing, but to find order in the chaotic milieu of nature is a source of great hope. Calendars that predict the seasons allowed agricultural societies to improve their harvests and so avoid famine. The germ theory of disease motivated doctors to wash their hands, transforming hospitals from centers of disease incubation to places of healing. Scientific curiosity – to ask neurotically “why?” – is the source of great power over the world.
That power is visible in the manufactured reality all around us: the houses, roads, dams and microchips. None of these things would exist in the natural world; they exist only because people have a purpose for them. That purpose may be as simple as cooking dinner for our children, or as grand as ensuring that the world’s knowledge is available through the internet to any person, anywhere, any time. Which of our goals are realized is largely a matter of economics: are enough people invested in the outcome that they are willing to pay to see it accomplished? We don’t have to have a kitchen in every home, but few of us can afford to go out to dinner every night, so we pay for a kitchen. The cost and delay of moving information via mail drove the growth of the internet, at an expense that I would imagine (I can’t find numbers online) has run into trillions of dollars.
Now when people invest a substantial sum of money, they want some assurance that they’ll get what they’re paying for. Appreciating that gold does not tarnish, the sultan seeking to protect the beauty of his marble dome does not want to be told, “All natural gold originates in supernovae.” Or, worse, “If we smash heavy elements together in an accelerator, we can produce ten gold atoms a day.” Those kinds of answers are acceptable in scientific circles, but they are not acceptable in the engineering world. In the engineering world, when somebody comes to you with money and a problem, your job is to design an implementation that will realize their goal.
Since we’re a species of Jones-chasers, most of the time the engineer’s job is fairly simple. People come wanting something that they’ve seen, and the challenge is to understand how it was done before and to adapt the design to local conditions. But every now and then somebody comes in asking for something completely novel: they want to build an elevator to space, for example, or create a light source that doesn’t produce soot. The engineer has no way of knowing whether such things are possible, except by reference to science.
It is into the gap between the formless knowledge of science and the concrete specifications of engineering that programming falls. Consider the light bulb: scientists know that heated objects glow, but they also burn. Applying an electric voltage to a poor conductor causes it to heat as current flows through it. The filament will burn when exposed to oxygen, so we need to isolate it from air. Using an opaque material as the barrier would also trap the generated light. However, some solids (such as glass) are transparent, and air permeates only slowly through them.
The illustration is a cause-and-effect analysis. It looks at the desirable and undesirable outcomes of various scientific effects, attempting to eliminate the latter while preserving the former. The cause-and-effect analysis leads to an operational hypothesis: if we embed a wire in a glass bulb and apply a voltage, the wire will heat and emit light. This is not an engineering specification, because we don’t know how much the light bulb will cost, or how much light it will emit. But it also isn’t science, because the operational hypothesis is not known to be true. There may be no filament material that will glow brightly enough, or the required voltage may be so high that the source sparks down, or the glass may melt. But without the operational hypothesis, which I have called a “program,” engineering cannot begin.
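For readers who think in code, the cause-and-effect analysis above can be sketched as a small data structure. This is purely my own illustrative framing – the step descriptions, names, and function are hypothetical, not an engineering model:

```python
# Each design step addresses the undesirable side effect of the
# previous one while trying to preserve the desirable effect.
# Tuples are (decision, desirable effect, undesirable effect).
steps = [
    ("heat a filament electrically",
     "it glows", "it burns in air"),
    ("enclose the filament to exclude air",
     "no combustion", "an opaque barrier traps the light"),
    ("use glass as the barrier",
     "light escapes", "air permeates only slowly"),
]

def operational_hypothesis(chain):
    """Chain the decisions into a single testable claim."""
    actions = " and ".join(decision for decision, _, _ in chain)
    return f"If we {actions}, the filament will heat and emit light."

print(operational_hypothesis(steps))
```

The point of the sketch is that the hypothesis is assembled from the decisions alone; whether it holds – whether any filament glows brightly enough, at a practical voltage, without melting the glass – is exactly what engineering must then test.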
We examined the challenge of software engineering in the first post in this series, focusing on the rapid development of the field and the difficulty of translating customer needs into terms that can be interpreted by digital processors. Today we have arrived at a more subtle point: the algorithms written in our programming languages process information to produce information. The inputs to this process arise from nature, from humans, and increasingly from other machines, and they change constantly. Therefore very few programs (except perhaps those for space probes) are deployed into predictable environments. That includes the hardware that runs the program – it may be Atom or Intel or AMD – so the performance of software is not known a priori. For all of these reasons, every piece of software is simply an operational hypothesis. It is a program, not a product.