Margaret Hamilton working on the Apollo flight software.
Computers haven't been around for long. If you read one of the many histories of computing and information, such as James Gleick's The Information or Jonathan Grudin's history of HCI, you'll learn that before digital computers, computers were people who calculated things manually, as portrayed in the film Hidden Figures (watch it if you haven't!). You'll also learn that even after digital computers arrived, programming wasn't something that many people did. It was reserved for whoever had access to a mainframe, and they wrote their programs on punchcards. Computing was in no way a ubiquitous, democratized activity; it was reserved for the few who could afford and maintain a room-sized machine.
Because programming required such painstaking planning in machine code, and because computers were slow, most programs were not that complex. Their value was in calculating things faster than a person could by hand: thousands of calculations per minute rather than one. Computer programmers were not solving problems that had no solutions; they were translating existing solutions (for example, the quadratic formula) into notation a computer understood. Their power wasn't in creating new realities or facilitating new tasks; it was in accelerating old ones.
The birth of software engineering, therefore, did not come until programmers started solving problems that had no existing solutions, or that were entirely new ideas. Most of this early work was done in academic contexts, to develop things like basic operating systems and methods of input and output. These were complex projects, but as research, they didn't need to scale; they just needed to work. It wasn't until the late 1960s that the first truly large software projects were attempted commercially, and software had to actually perform.
The IBM 360 operating system was one of the first big projects of this kind. Suddenly, there were multiple people working on multiple components, all of which interacted with one another. Each part of the program needed to coordinate with the others, which usually meant that each part's authors needed to coordinate, and out of this need the term software engineering was born. Programmers and academics from around the world, especially those working on big projects, created conferences so they could meet and discuss their challenges. At the first software engineering conference in 1968, attendees speculated about why projects were shipping late, why they were over budget, and what they could do about it.
At the time, one of the key people behind coining the phrase software engineering was Margaret Hamilton, a computer scientist who was Director of the Software Engineering Division of the MIT Instrumentation Laboratory. One of the lab's key projects in the late 1960s was developing the on-board flight software for the Apollo space program. Hamilton led the development of error detection and recovery, the information displays, the lunar lander software, and many other critical components, while managing a team of other computer scientists who helped. It was as part of this project that many of the central problems in software engineering began to emerge, including verification of code, coordination of teams, and managing versions. This led to one of her passions: giving software development legitimacy as a form of engineering, since at the time it was viewed as routine, uninteresting, and simple work. Her leadership helped establish software engineering as a core part of systems engineering.
The first conference, the IBM 360 project, and Hamilton's experiences on the Apollo mission identified many problems that had no clear solutions.
Some of these questions, particularly those concerning the human aspects of software engineering, have been hopelessly difficult to understand and improve. One of the seminal books on these issues was Fred P. Brooks, Jr.'s The Mythical Man Month. In it, he presented hundreds of claims about software engineering. For example, he hypothesized that adding more programmers to a project would, past some point, actually make productivity worse, not better, because knowledge sharing would be an immense but necessary burden. He also claimed that the first implementation of a solution is usually terrible and should be treated like a prototype: used for learning and then discarded. These and other claims have been the foundation of decades of research, all in search of some deeper answer to these questions.
Other social aspects of software engineering have received considerably less attention. For example, despite the central role of women in programming the first digital computers, and the central role of women like Margaret Hamilton and Grace Hopper in leading the formation of software engineering as a field in research and government, these histories are often forgotten, erased, and overshadowed by the gradual shift of software development from a field dominated by women to a field dominated by men. Many texts are beginning to document the sexism at the heart of this culture shift (e.g., Abbate 2012). These histories show that, just like any other human activity, there are strong cultural forces that shape how people engineer software together.
If we step even further beyond software engineering as an activity and think more broadly about the role that software is playing in society today, there are also other, newer questions that we've only begun to answer. If every part of society now runs on code, what responsibility do software engineers have to ensure that code is right? What responsibility do software engineers have to avoid algorithmic bias? If our cars will soon drive us around, who's responsible for the first death: the car, the driver, the software engineers who built it, or the company that sold it? These ethical questions are in some ways the future of software engineering, likely to shape its regulatory context, its processes, and its responsibilities.
Software also plays economic roles in society that it didn't before. Around the world, software is a major source of job growth, but also a major source of automation, eliminating jobs that people used to do. These larger forces demand that software engineers have a stronger understanding of the roles software plays in society, as the decisions engineers make can have profound unintended consequences.
We're nowhere close to having deep answers to these questions, neither the old ones nor the new ones. We know a lot about programming languages and a lot about testing; these areas are amenable to automation, and so computer science has rapidly improved and accelerated these parts of software engineering. The rest, as we shall see, has not made much progress. In this class, we'll discuss what we know and the much larger space of what we don't.
Abbate, Janet (2012). Recoding Gender: Women's Changing Participation in Computing. The MIT Press.
Brooks Jr., F. P. (1995). The Mythical Man-Month (anniversary ed.). Addison-Wesley.
Gleick, James (2011). The Information: A History, A Theory, A Flood. Pantheon Books.
Grudin, Jonathan (2017). From Tool to Partner: The Evolution of Human-Computer Interaction. Morgan & Claypool.
Kay, A. C. (1996, January). The early history of Smalltalk. In History of programming languages---II (pp. 511-598). ACM.
Ko, A. J. (2016). Interview with Andrew Ko on Software Engineering Daily about Software Engineering Research and Practice.
McCarthy, J. (1978, June). History of LISP. In History of programming languages I (pp. 173-185). ACM.
Metcalf, M. (2002, December). History of Fortran. In ACM SIGPLAN Fortran Forum (Vol. 21, No. 3, pp. 19-20). ACM.
Stroustrup, B. (1996, January). A history of C++: 1979--1991. In History of programming languages---II (pp. 699-769). ACM.