The Mythology of Programming Language Ideas

Tomas Petricek offers a stimulating analysis of programming language design in the framework of science as a practice. As tools advance, later generations often deride their predecessors as “unscientific,” dismissing their theories as myth. This is a point that I have advanced in defense of ancient philosophers and theologians: they were thinking rigorously within the limitations of the evidence that they could perceive. Moreover, their thinking encompassed types of experience (what we call “spiritual”) that modern scientists, trapped in materialism, fail to honor.

Petricek is particularly interested in the evolution of programming languages, which are subject to rigorous scientific analysis with regard to both expressiveness and efficiency. My comment to him:


I greatly enjoyed your article. I do have one specific vision regarding the future: programming language design is about bridging the mismatch between the digital and organic perceptions of reality. For much of the history of programming languages, the burden was on the organic participants to conform to the limitations of digital devices. That boundary is shifting rapidly to allow digital devices to interpret utterances of non-programmers.

Within any one paradigm for adaptation between the two domains of perception, “developers” (which may include the general public) are not really involved in science as a search for first principles that constrain possibilities. Rather, they are exploring and evolving an ecosystem. An analogy is the human genome, which can be understood – but probably not justified in scientific terms (missing initial conditions), nor optimized in engineering terms (due to complex functional dependencies).

Identity Crisis

So I’ve been refreshing my Java skills, working through Deitel and Deitel’s “Java Standard Edition 8” training material. The first seven chapters have been pretty easy going, but I’ve been doing the usual – blowing out the simple coding examples so that they actually model the real world.

For example, when simulating shuffling a deck of cards, the sample code simply walks the deck from top to bottom, swapping each card with a random one below it. Of course this violates the way that a real shuffle works. In a real shuffle, the cards at the top of the two stacks of the cut end up closer to the top. So I wrote a shuffle algorithm that simulates the cut, then merges the two stacks by taking cards randomly from each until one is exhausted.
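Here is a minimal sketch of that riffle-style shuffle; the generic list type and the amount of jitter around the cut point are my own choices, not anything from the training material:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Sketch of a riffle shuffle: cut the deck near the middle, then merge
    // the two stacks by repeatedly taking the top card from one or the other
    // at random until one stack is exhausted.
    public class RiffleShuffle {
        private static final Random random = new Random();

        public static <T> List<T> shuffle(List<T> deck) {
            if (deck.size() < 2) {
                return new ArrayList<>(deck);
            }
            // Cut somewhere near the middle, with a few cards of variation.
            int cut = deck.size() / 2 + random.nextInt(7) - 3;
            cut = Math.max(1, Math.min(deck.size() - 1, cut));
            List<T> top = new ArrayList<>(deck.subList(0, cut));
            List<T> bottom = new ArrayList<>(deck.subList(cut, deck.size()));

            List<T> merged = new ArrayList<>(deck.size());
            while (!top.isEmpty() && !bottom.isEmpty()) {
                List<T> source = random.nextBoolean() ? top : bottom;
                merged.add(source.remove(0));
            }
            // One stack is now empty; whatever remains of the other goes underneath.
            merged.addAll(top);
            merged.addAll(bottom);
            return merged;
        }
    }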

The next assignment is to capture some statistics on a set of test scores. It’s a pretty simple problem: minimum and maximum values and the average. But you know where that goes: at the end of the term, the scores for all the assignments have to be rolled up into some final grade. This seemed like an interesting problem – coming up with some general mechanism for aggregating scores into a final grade.
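The assignment itself boils down to a single pass over the scores – something like this quick sketch (the sample scores are made up):

    // Quick sketch of the basic assignment: minimum, maximum and average in one pass.
    public class ScoreStats {
        public static void main(String[] args) {
            double[] scores = { 87, 68, 94, 100, 83, 78, 85, 91, 76, 87 };
            double min = Double.MAX_VALUE, max = -Double.MAX_VALUE, sum = 0;
            for (double s : scores) {
                min = Math.min(min, s);
                max = Math.max(max, s);
                sum += s;
            }
            System.out.printf("min=%.1f  max=%.1f  average=%.2f%n",
                    min, max, sum / scores.length);
        }
    }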

We all know how terms start: the teacher hands out a syllabus with a weighting for each element of the course work. Homework, quizzes, mid-terms, papers and finals are typical elements, and each is assigned an expected contribution to the final grade.

Of course, it never works out that way. Some midterms are harder than others, but each should contribute the same weight to the final grade. This is sometimes accomplished by scaling the test scores so that the averages come out the same. And what if the students move through the material faster or slower than in prior years? Might they not complete more or fewer assignments than expected?
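As a rough illustration of where I ended up, here is a sketch of the aggregation mechanism. The normalization scheme (scaling each element so its class average lands on a common target) and the re-weighting over completed elements are my own design choices, not anything prescribed by the training material:

    import java.util.Map;

    // Sketch of weighted grade aggregation. Each element of the course work
    // (homework, quizzes, midterms, ...) carries a syllabus weight. Raw scores
    // are first normalized so that every element's class average maps to the
    // same target, then combined into a weighted final percentage. Elements
    // that were never completed this term are dropped and the weights rescaled.
    public class GradeAggregator {

        /** Scale a raw score so that the element's class average maps to target. */
        static double normalize(double rawScore, double classAverage, double target) {
            return classAverage == 0 ? 0 : rawScore * (target / classAverage);
        }

        /** Combine normalized element scores, keyed by element name, into a final percentage. */
        static double finalGrade(Map<String, Double> weights, Map<String, Double> normalizedScores) {
            double total = 0, weightSum = 0;
            for (Map.Entry<String, Double> entry : weights.entrySet()) {
                Double score = normalizedScores.get(entry.getKey());
                if (score == null) {
                    continue;   // element not completed this term: drop it, re-weight the rest
                }
                total += entry.getValue() * score;
                weightSum += entry.getValue();
            }
            return weightSum == 0 ? 0 : total / weightSum;
        }
    }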

So this simple little fifty-line program became a ten-module monster. I can’t entirely blame my son Gregory for the damage; it grew out of my interview with him on grading policies at the JC he’s attending. But he did bring up a really interesting point: nobody but the professor knows the actual assignment scores. She produces a final letter grade, and that’s all that the records office knows.

We were trying to decide how to model this, and came up with the idea of the professor having a grade book with a private set of identifiers that link back to the student records held by the registrar. After each assignment is graded, the instructor looks up the grade book ID for the student, and adds the grade to the book against that ID. At the end of the term, the professor combines the scores to produce a class curve, and assigns a letter grade for each interval in the distribution. In the end, then, no student knows how close they were to making the cut on the next letter grade, so nobody knows whether or not they have a right to appeal the final grade.
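A minimal sketch of that arrangement, with the roster private to the Instructor class. The class names, the fixed letter-grade cut-offs, and the use of UUIDs are all my own stand-ins for illustration:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.UUID;

    // Sketch: the instructor holds the only mapping from registrar IDs to
    // grade-book IDs. Per-assignment scores stay behind the private roster;
    // only the final letter grade ever leaves this class.
    public class Instructor {

        // registrar student ID -> opaque grade-book ID (private to the instructor)
        private final Map<String, String> roster = new HashMap<>();
        // grade-book ID -> assignment scores recorded against it
        private final Map<String, List<Double>> gradeBook = new HashMap<>();

        public void enroll(String registrarId) {
            roster.put(registrarId, UUID.randomUUID().toString());
        }

        public void recordScore(String registrarId, double score) {
            String bookId = roster.get(registrarId);
            gradeBook.computeIfAbsent(bookId, k -> new ArrayList<>()).add(score);
        }

        /** The registrar sees only the letter grade, never the underlying scores. */
        public String finalLetterGrade(String registrarId) {
            String bookId = roster.get(registrarId);
            double average = gradeBook.getOrDefault(bookId, new ArrayList<>())
                    .stream().mapToDouble(Double::doubleValue).average().orElse(0);
            // A real curve would be cut from the class distribution;
            // fixed boundaries keep the sketch short.
            if (average >= 90) return "A";
            if (average >= 80) return "B";
            if (average >= 70) return "C";
            if (average >= 60) return "D";
            return "F";
        }
    }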

In my code model, therefore, I have two kinds of people: students and instructors. Now we normally identify people by their names – every time you fill out a form, that information goes on it. But sometimes names change.

In the grade book, of course, we also want identities to remain anonymous. We need mechanisms to make sure that IDs are difficult to trace back to the person being described. The NSA did this with records subpoenaed from the phone carriers – though few were convinced that the NSA wasn’t bypassing the restrictions that were supposed to keep names from being linked to phone calls until a court issued a warrant. In the case of my simple grade book model, it’s accomplished by making the class roster private to the “Instructor” class.

This all got me to thinking about how slippery “identity” is as a concept. It can be anything from the random number chosen by the instructor to a birth certificate identifier to a social security number to a residence. All of these things provide some definite information about a person, information that can be used to build a picture of their life. Some of it is persistent: the birth certificate number. Other identities may change: the social convention is that a woman changes her name when she marries. And in today’s mobile world, we all change residences frequently. A surprising change in my lifetime has been that my phone number doesn’t change when I change residence, and the number is now private, where once it was shared with seven people on a party line.

So as I was modeling the grade book, I found myself creating an “Instructor” class and a “Student” class, and adding a surname and given name to both. I hate it when this happens, and in the past I would have created a “Person” class to capture that information, and made “Student” and “Instructor” sub-classes of Person. But that always fails, of course: what happens when an instructor wants to sign up for an adult education class?

And so I hit upon this: what if we thought of all of these pieces of identifying information as various forms of an “Identity”? Then the instructor and student records each link to an identity, which could be a “Personal Name.” The association of “Personal Name” with “Instructor” or “Student” reflects a role for the person represented by the identity. That role may be temporary, which means that we need to keep a start and end date for each role. And the role itself may be identifying information – certainly a student ID is valid to get discount passes at the theater, for example.
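Here is a rough sketch of that shape – the field names and the choice to represent identity kinds as plain strings are just my first pass, not settled design:

    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: a Person is described by any number of Identity records
    // (personal name, birth certificate number, student ID, ...) and plays
    // Roles (Student, Instructor) that are bounded in time. A role can itself
    // serve as identifying information, so it carries its own identity record.
    class Identity {
        final String kind;    // "PersonalName", "BirthCertificate", "StudentID", ...
        final String value;

        Identity(String kind, String value) {
            this.kind = kind;
            this.value = value;
        }
    }

    class Role {
        final String kind;        // "Student" or "Instructor"
        final Identity identity;  // e.g. the student ID issued for this role
        final LocalDate start;
        LocalDate end;            // null while the role is still active

        Role(String kind, Identity identity, LocalDate start) {
            this.kind = kind;
            this.identity = identity;
            this.start = start;
        }
    }

    class Person {
        final List<Identity> identities = new ArrayList<>();
        final List<Role> roles = new ArrayList<>();

        // An instructor signing up for an adult education class simply takes on
        // a second, overlapping role; no Person subclassing is required.
        void assume(Role role) {
            roles.add(role);
        }
    }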

The subtlety is that addresses and old phone numbers are reassigned to other people every now and then. The latter was a frequent hassle for people who got the phone number last held by a defunct pizza take-out. And it’s even worse for the family living right in the middle of America, whose address is the default assigned to every internet address that can’t be traced to a more definite location. The unfortunate household gets all kinds of writs for fraud committed by anonymous computer hackers.

But I really wish that I had a tool that had allowed me to maintain a database with all of this information in it. I don’t think that I can reconstruct my personal history at this point. As it is, what I have in my personal records is my current identity: my credit card numbers (which BofA fraud detection keeps on replacing), my current address and phone number, my current place of employment. That is all that the computer knows how to keep.

With the upshot that I know far less about myself than the credit agencies do.

CLR’ing Away the .NET

When Microsoft first began offering “component” technology to the world, it was a result of unmanaged programming. I don’t mean “unmanaged” in the technical sense (more on that later). I mean “unmanaged” in the original sense: after smashing its office suite competition by tying a full-featured release of Office to Windows 3.1, Microsoft found its development teams competing with each other. When Word users asked for limited spreadsheet capabilities, the Word developers began recreating Excel. When Outlook users asked for sophisticated text formatting for their e-mails, that development team began recreating Word.

Now this reflects two things. The first and most significant is that the code base created by the application teams was not modular. The Word team should have been able to lift the basic spreadsheet engine right out of the Excel code base. The second was that Microsoft had a schizophrenic developer base. Visual Basic developers enjoyed the features of an interpreted language that allowed them to modify code during execution, while Visual C++ developers enjoyed the benefits of high-speed execution. Unfortunately, those two worlds didn’t talk well to each other. C++ uses ‘0’ to locate the first item in a list, while VB uses ‘1’.

Microsoft’s attempt to bridge these gulfs was their Component Object Model. As a response to the duplication of code, it was a little heavy: COM enabled users to add an Excel spreadsheet in Word, but at the cost of starting a full instance of Excel in the background, doubling the memory requirements. By contrast, I saw a demonstration of IBM SOM Objects at a software engineering conference in 1997 that brought a spreadsheet into a text editor while adding only 3% to the memory burden.

At that conference, IBM touted a world in which a graduate student could write add-ins to popular business applications. This was obviously not in the interests of Microsoft, whose dominance of the office application market fueled its profits. This was evident in their implementation of COM. When adding a new component to the operating system, the component registers its services (in the “Windows Registry,” of course). Microsoft published its Office services so that developers of enterprise applications could automatically create Word documents and Excel spreadsheets. That should have meant that other teams could create alternative implementations of those interfaces. To hobble that strategy, Microsoft did not include a reverse lookup capability in its registry. In other words, if you wanted to let your user pick which dictionary to use for a spell-checker, there was no way to find out which installed components provided a “Dictionary” service. You had to walk the entire registry and ask each component in turn whether it was a “Dictionary.” This was not a cheap operation: when I tried it in 1998, it took almost a minute.

On top of this, Microsoft biased its component technology toward the VB developer, assuming that C++ developers were sophisticated enough to work around the inconsistencies. This was not a minor burden. What took three lines of code in VB could take a page in C++.

However, COM and its successor DCOM were the latest shiny toys, and many C++ developers flocked to the technology, even for pure C++ implementations. I was scandalized, because C++ had its own methods for creating reusable modules, methods that lay at the foundations of COM and DCOM underneath the cruft of type conversions and named method invocations. I finally found an article on MSDN that warned that COM and DCOM should only be used for systems that were configured dynamically. This included, famously, Visual Basic, host to a rich market of third-party user interface controls (known as “ActiveX” controls). But Microsoft’s advice was not heeded by the industry, and even today I am repackaging COM components as dynamic-link libraries (DLLs) that publish native C++ objects.

I must admit that over time the work demanded of the C++ developer has moderated. Visual Studio can generate C++ interfaces from a COM “type library,” and lets users decorate a class declaration with attributes that allow the tools to automatically generate the COM wrappers that publish code to VB.

Unfortunately, the field tilted against the C++ developer when Microsoft introduced its .NET technology. One of the major charges leveled against C++ over the years is that developers need to explicitly manage the resources consumed by their programs. Memory in particular is a bugaboo, and one of the major challenges of writing a complex C++ application is ensuring that memory doesn’t “leak.” This frustration was catered to by the creators of Java and other “managed” languages (including Microsoft’s C#). Unfortunately, it encourages the fiction that memory is the only resource that developers need to manage, a fiction that is addressed explicitly in the documentation of Java and C# class libraries that open network connections or access external components such as databases.

Be that as it may, Microsoft had to decide whether to continue to support new technologies, such as HTML5 and XML, on the native foundations of the machine architecture, or within the higher-level abstractions of the managed world. The overwhelming popularity of managed languages drove the choice. Microsoft no longer delivers full-featured libraries for C++ developers. For a long time, C++ developers could only access those features through the clumsy methods of COM programming.

This came to a head for my team last year when trying to implement a new feature that required parsing of XML files. A demonstration application was quickly written in C#, but as the effort to access that from our C++ code was prohibitive, we went looking for a third-party XML library. We couldn’t find one that did the job.

The lack of support for C++ libraries has created some distressing contradictions. C++ developers have always been proud to claim that code written in C++ runs twice as fast as code written in a managed language. Recent studies reveal, however, that processing a large file, such as sensor data produced by a networked device or log files from a cloud application, is dominated by the time it takes to read the file. The C++ libraries appear to take twice as long to read the file as the C# libraries.

Driven by this evidence to seek new methods for using .NET libraries in C++ code, I finally came upon Microsoft’s C++/CLI or CLR technology. In CLR, the developer has direct control over whether his objects are managed or unmanaged. This means that the speed of C++ execution can be leveraged when necessary, while also allowing access to the rich .NET libraries maintained by Microsoft. Originally CLR was advanced as a technology for migrating C++ applications to the .NET libraries, but it turned out that there were too many inconsistencies between the run-time environment established for native C++ applications and .NET applications.

But what about those of us with a million lines of code that run within the native C++ execution environment? Is there no bridge?

I am glad to report that we have found it. It is to create a CLR library that exports unmanaged objects using the classic DLL methods that COM originally supplanted. The unmanaged objects wrap managed .NET components, and use C++/CLI methods to convert between C++ and .NET data types.

I am certain that there are some constraints on this strategy, particularly when trying to integrate .NET and C++ components in user interfaces or attempting to ensure data consistency during simultaneous execution in both environments. But for simple operations that drop into the managed world temporarily, it seems to work just fine.

And I find some joy in coming full circle: with only a few lines of code, I am once again able to write code as a C++ developer should, rather than as a second-class citizen in a market targeted at developers solving far simpler problems than I confront every day.

Up in the Cloud

Information Systems, the discipline of organizing computer and software resources to facilitate decision-making and collaboration, is undergoing a revolution. The opportunity is created by cheap data storage and high-speed networking. The necessity is driven by the unpredictability of demand and the threat of getting hacked. These factors have driven the construction of huge data and compute centers that allow users to focus on business solutions rather than the details of managing and protecting their data.

As a developer, I find this proposition really attractive. I’m building a sensor network at home, and I’d like to capture the data without running a server full time. I’d also like to be able to draw upon back-end services such as web or database servers without having to install and maintain software that is designed for far more sophisticated operations.

The fundamental proposition of the cloud is to create an infrastructure that allows us as consumers to pay only for the data and software that we actually use. In concept, it’s similar to the shift from cooking on a wood-fired stove fed by the trees on our lot to cooking on an electric range. Once we shift to electricity, if we decide to open a restaurant, we don’t have to plan ahead ten years to be certain that we have enough wood; we just pay for more electricity. Similarly, if I want to develop a new solution for home heating control, I shouldn’t have to pay a huge amount of money for software licenses and computer hardware up front – that should be borne by the end-users. And, just as a chef probably doesn’t want to learn a lot about forestry, so I shouldn’t have to become an expert in the administration of operating systems, databases and web servers. Cloud services promise to relieve me of that worry.

It was in part to assess the reality of that promise that I spent the last two days at Microsoft’s Cloud Road Show in Los Angeles. What I learned was that, while they pursue the large corporate customers, Microsoft is still a technology-driven company, and so they want to hear that they are also helping individual developers succeed.

But there were several amusing disconnects.

Satya Nadella took the helm at Microsoft following Steve Ballmer’s debacles with Windows 8 and Nokia. Ballmer was pursuing Apple’s vision of constructing a completely closed ecosystem of consumer devices and software. Nadella, head of the Azure cloud services effort, blew the top off of that plan, declaring that Microsoft would deliver solutions on any hardware and operating system that defined a viable market. Perversely, what I learned at the roadshow was that Microsoft is still very much committed to hardware, but not the kind of hardware you can carry on your person. Rather, it’s football fields stacked three-high with shipping containers full of server blades and disk drives, each facility drawing the power consumed by a small city. None of the containers belongs to a specific customer (actually the promise is that your data will be replicated across multiple containers). They are provisioned for the aggregate demand of an entire region, running everything from a WordPress blog to global photo-sharing services such as Pinterest.

This scale drives Microsoft to pursue enterprise customers. This is a threat to established interests – large data centers are not an exportable resource, and so provide a secure and lucrative source of employment for their administrators. But that security comes with the pressure of being a bottleneck in the realization of others’ ambitions and a paranoid mind-set necessary to avoid becoming the latest major data-breach headline. The pitch made at the roadshow was that outsourcing those concerns to Microsoft should liberate IT professionals to solve business problems using the operations analysis software offered with the Azure platform.

To someone entering this magical realm, however, the possibilities are dizzying. At a session on business analytics, the presenter was asked which analysis package would be best for those looking to build custom algorithms; the response was “whatever tool your people are familiar with.” This might include R (preferred by statistics professionals) or Python (computer science graduates) or SQL (database developers). For someone looking to get established, that answer isn’t comforting.

But it reveals something else: Microsoft is no longer in the business of promoting a champion – they are confident that they have built the best tools in the world (Visual Studio, Office, SharePoint, etc.). Their goal is to facilitate delivery of ideas to end customers. Microsoft also understands that this means long-term maintenance of tightly coupled ecosystems where introduction of a malfunctioning algorithm can cost tens of millions of dollars, and viruses billions.

But what about the little guy? I raised this point in private after a number of sessions. My vision of the cloud is seeded by my sons’ experience in hacker communities, replete with “how-to” videos and open-source software modules. I see this as the great hope for the future of American innovation. If a living space designer in Idaho can source production of a table to a shop in Kentucky with a solid guarantee of supply and pricing comparable to mass-produced models, then we enter a world in which furniture showrooms are a thing of the past, and every person lives in a space designed for their specific needs. As a consumer, the time and money that once would have been spent driving around showrooms and buying high-end furniture is invested instead in a relationship with our designer (or meal planner, or social secretary).

Or how about a “name-your-price” tool for home budgeting? If you’ve got eighty dollars to spend on electricity this July, what should your thermostat setting be? How many loads of laundry can you run? How much TV can you watch? What would be the impact of switching from packaged meals to home-cooked? Can I pre-order the ingredients from the store? Allocate pickup and preparation time to my calendar?

Development of these kinds of solutions is not necessarily approachable at this time. The low-end service on Azure runs about $200 a month. From discussion, it appears that this is just about enough to run a Boy Scout Troop’s activity scheduling service. But I am certain that will change. Microsoft responded to the open-source “threat” by offering development tools and services for free to small teams. Their Azure IoT program allows one sensor to connect for free, with binary data storage at less than twenty dollars a month.

At breakfast on Wednesday, I shared some of these thoughts with a Microsoft solutions analyst focused on the entertainment industry. I ended the conversation with the admission that I had put on my “starry-eyed philosopher” personality. He smiled and replied “You’ve given me a lot to think about.” It was nice to spend some time with people that appreciate that.

Father, Finally

My father is in the final stages of his journey here. For the last month, he has been surrendering to the prostate cancer that is invading his bones. His principal fear has been of being a burden to my mother, and so he has methodically tried to further the process. The degradation of his sense of taste is facilitating his resolve. It is clear that his extremities are being consumed in the effort to maintain the operation of his heart, lungs and brain.

I could mourn the loss of his brilliant intellect, but that intellect was a mixed blessing to his intimates. It was a very powerful tool that supported convictions that could lead to harsh judgments. What I am finding instead is that as he weakens and submits to confusion, for the first time in my life I am able to proffer simple acts of tenderness. Stroking his head, rubbing his chest over his heart, holding his hand: these have been rewarded by looks of wonder.

I was caught up, for much of my life, in my father’s ambitions for programming. On the title bar, the “Programming” link offers entries that introduce his philosophy of design. It is my own formulation: my father adopted obscure terminology to ensure precision of meaning, and believed that practice under his tutelage was essential to competence. In fact, inspired by Hesse’s “The Glass Bead Game”, his vision of a training center was a monastery. Having grown up with Diagrammatic Programming, when I joined him in the family business in 1995, I rapidly began to innovate. He found this intolerable, and when I finally had the opportunity to articulate my logic to him, his retort was “Well, it’s clear that if you talk long enough, Brian, you could convince people of anything.”

My mother dreaded our conversations. Even as recently as a few months ago, she would retreat into her office when I came by to visit him. I recognized the dynamic that evolved between us, but also saw that the problem was far more complex than just our personal history. During a transfer of ownership of the mobile home park property to its residents, my father fought a tremendous legal and spiritual battle with the lawyers seeking to maximize the developer’s profits at the cost of displacing old friends. My father eventually shared that the lead lawyer was ticketed on a DC10 that crashed when the cabin door popped open in flight, but chose at the last minute not to board. (Yes, a textbook case of misdirected anger.) I had my own struggle with the family law community that cultivated fear on the 7th floor of the Van Nuys court house. After one conversation with my father, I heard the thoughts of one of them admitting of me, “He’s far stronger than we’ve given him credit for.” Eventually I used my father to send a message back: “I’ve done what I’ve done in order that it couldn’t be said that people weren’t given a chance to do the right thing.”

In spite of his spiritual capacities, my father always pooh-poohed my own experiences. I received several clues as to his motivations over the years. Having suffered the traumatic losses of John and Robert Kennedy and Martin Luther King Jr., he observed once that “All the good people get killed.” Although he was bailed out of the financial consequences of his own ambitions by an inheritance from a distant aunt, he worried about my financial insecurity, and may have considered wasteful my itinerant attendance at churches throughout the Conejo Valley.

But there was a deeper aspect to the problem that became clear only in 2008 when I went out to the Netherlands on a business trip. As I stepped to the visa counter in Amsterdam, I caught the thought “Well, [the Americans] are finally producing real people.” I immediately entered a warm and open relationship with the engineers we had come to visit, and a couple of nights into the trip, I woke up to them poking around in my mind. They found my father, and showed me behind him the tomb of an ancient Germanic king, still struggling to retain control of his line.

My father never had a father. Grandfather Balke left my grandmother, at the time a professional ballet dancer and later an anesthesiologist, after my father was born. From my father’s response to my physical affection, I came to see that the lack of a father was the wound that his antagonists, both ancient and modern, used to attempt to control him and his children.

That realization brought me back to a day when, returning to work after lunch, I waited at a stop light outside the executive suites rented by my brother. The usual argument over priorities was raging in my head. Suddenly, a wave of energy moved through my mind from left to right. Both the stop light and the radio in my car went dead at the same instant, and a woman’s voice announced firmly “His job is to prove to people that love works.”

My father worried about his lack of success, voicing his concern that he didn’t know what it was about him that brought failure where others less talented had achieved success. On Sunday he let me tell him this: “There’s so much good in you, Dad, but the world is full of things that see good and pile dirt all over it. It’s really hard to love somebody without leaving an opening back the other way. One of the great frustrations in my life has been that every time I tried to reciprocate your caring was that you shut me out, as though there was something frightening inside of you that you wanted to protect me from. I’m sorry if I became angry with you at times.

“There are some things about loving that a man can learn only from a father. Next time, find a good father, Dad. It will be a wonderful life.”

Software and Agility

Back in the ’80s, when the Capability Maturity Model (CMM) movement was gathering steam, surveys reported that half of all software projects failed. Even today, a significant number of developers report that they have never worked on a successful software project. I’ve written about the relationship between this problem and Moore’s law in the past, but hucksters selling cure-alls don’t have time to investigate root causes.

This is evident most often in comparisons of development methodologies. Historically, corporate America applied the “Waterfall Model,” usually traced to a 1970 paper by Winston Royce (though the name itself was attached later). Royce identified seven critical activities in software development: systems requirements, software requirements, analysis, design, implementation, verification and operation. The seven follow a definite chain of information dependencies, suggesting the “waterfall” analogy. But Royce himself observed that no project followed that sequence. There were all kinds of feedback loops from later stages to earlier stages.

What is astonishing to me is that later practitioners removed the first and last steps. This fosters amnesia about the evolution of the institutions that software developers support. Prior to World War II, most businesses were dominated by “tribal knowledge” of their operations. Goals were set from on high, but implementation was organic and often opaque. That began to change during the war: confronted with the daunting logistics of WW II, the armed services formed logistical planning offices and trained practitioners. It was these men, including Robert McNamara, who went out and transformed the practices of corporate management in the 50s.

Thus the importance of the “systems requirements” stage of the waterfall process. Information systems were being injected into organizations whose theory of operation was vastly different from actual performance. Initial users of structured analysis, for example, discovered that many significant decisions were made by white-collar workers loitering around the water cooler, bypassing the hierarchical systems of reporting required by their organizational structure. Deploying an information system that enforced formal chains of authorization often disrupted that decision making, and organizations suffered as a result.

The common charge leveled against the Waterfall model is that the requirements are never right, and so attempts to build a fully integrated solution are doomed to fail. This has led to models, such as Agile and Lean software development, that promote continuous delivery of solutions to customers. But remember what supports that delivery: ubiquitous networking and standard software component models (including J2EE, Spring, SQL databases, and .NET) that allow pieces to be replaced dynamically while systems are operating. Those technologies didn’t exist when the waterfall model was proposed. And when they did arrive, proponents of the model immediately suggested a shift to “rapid prototyping” activities that would place working code before key end users as early in the project as possible. The expectation was that the politically fraught early stages of requirements discovery could then be avoided.

Actually, this might be possible at this point in time. Information systems provide instrumentation of operations to the degree that SAP now advertises the idea that they allow businesses to manifest a “soul.” Web service architectures allow modified applications to be presented to a trial population while the old application continues to run. Technology may now be capable of supporting continuous evolution of software solutions.

But removing the systems requirements stage of the process leaves this problem: where do requirements come from? Watching the manipulation of statistics by our presidential candidates, only the naive would believe that the same doesn’t occur in a corporate setting. Agile and Lean models that promise immediate satisfaction weaken the need for oversight of feature specification, perhaps opening the door to manipulation of application development in support of personal ambitions among the management team.

Control of such manipulation will be possible only when integrated design is possible – where the purpose of implementing a feature is shown in the context of a proposed operation. Currently that kind of design is not practiced – although Diagrammatic Programming has demonstrated its possibility.

In our current context, however, the wisdom of the CMM is still to be heeded. In a comment to an author pushing Agile over Waterfall development, I summarized the CMM’s five stages as follows:

  1. Define the boundary around your software process, and monitor and control the flow of artifacts across that boundary.
  2. Require that each developer describe his or her work practices.
  3. Get the developers to harmonize their practices.
  4. Create a database to capture the correlations between effort (3) and outcomes (1).
  5. Apply the experience captured in (4) to improve outcomes.

This is just good, sound, evidence-based management, and the author thanked me for explaining it to him. He had always thought of the CMM as a waterfall enforcement tool, rather than as a management process.

And for those arguing “Waterfall” vs. “Agile” vs. “Lean”: if you don’t have CMM-based data to back up your claims, you should be clear that you’re really involved in shaking up organizational culture.

Abuse in the Linux Kernel Community

Proclamations of concern over the abusiveness of the Linux Kernel Community have been growing louder in the open-source world. Steven Vaughan-Nichols summarizes the concerns in Computerworld. My comment on the matter?


Ancandune remarks wisely on the problem that “rude and hostile” imposes on the transmission of knowledge. I do not necessarily subscribe to his characterization of the motivating psychology. Perfectionists are driven by their own set of hostile interior voices. They don’t just produce something and throw it over the wall – they lie awake at night thinking about all the ways it can blow up in their faces. What Linus may be attempting to demonstrate in his communications is how he goes about thinking when he writes code.

Is Linus a healthy person? That’s for him to judge.

The important question is whether the community is healthy. Steve Jobs and Bill Gates had boardrooms filled with over-sized egos to help them manage their succession plans. What is Linus going to do? Anoint a successor? Or will the community devolve into a WWF RAW! donnybrook with the last man standing holding the belt? Another possibility is that the corporations that finance many contributors will step in and appoint a successor.

Linus’s authority arose organically over many years. The community allows him the right to be critical. But it is not being critical of others that conditions his success – it is his ability to think critically. The community should recognize that distinction, and mercilessly criticize and purge those that emulate his style without bearing his gifts or responsibilities.


To illustrate my point regarding self-criticism, here’s the content of an e-mail characterizing a problem we had with the build at work recently:

It’s the usual stupidity – I don’t even remember why I created this file, but it’s just a copy of MotorIDCommander.cpp. It was probably intended to link AutoCommCommander with MotorIDCommander, but I never modified the contents.

Anyways – it’s excluded from the build in debug mode but not in release mode. Khalid is off at physical therapy today with the project file checked out, so I can’t publish a fix. If you can do it locally, that would get you moving forward.

Sorry

Brian

This is like the fourth or fifth time I’ve done this – left a file in the build for release mode after excluding it in debug mode.