The Mythology of Programming Language Ideas

Tomas Petricek offers a stimulating analysis of programming language design in the framework of science as a practice. As tools advance, later generations often deride their predecessors as “unscientific,” seeing their theories as myth. This is a point that I have advanced in defense of ancient philosophers and theologians: they were thinking rigorously within the limitations of the evidence that they could perceive. Moreover, their thinking encompassed types of experience (what we call “spiritual”) that modern scientists, trapped in materialism, fail to honor.

Petricek is particularly interested in the evolution of programming languages, which are subject to rigorous scientific analysis as regards both expressiveness and efficiency. My comment to him:

I greatly enjoyed your article. I do have one specific vision regarding the future: programming language design is about bridging the mismatch between the digital and organic perceptions of reality. For much of the history of programming languages, the burden was on the organic participants to conform to the limitations of digital devices. That boundary is shifting rapidly to allow digital devices to interpret utterances of non-programmers.

Within any one paradigm for adaptation between the two domains of perception, “developers” (who may include the general public) are not really involved in science as a search for first principles that constrain possibilities. Rather, they are exploring and evolving an ecosystem. An analogy is the human genome, which can be understood – but probably not justified in scientific terms (the initial conditions are missing), nor optimized in engineering terms (due to complex functional dependencies).

Identity Crisis

So I’ve been refreshing my Java skills, working through Deitel and Deitel’s “Java Standard Edition 8” training material. The first seven chapters have been pretty easy going, but I’ve been doing the usual – blowing out the simple coding examples so that they actually model the real world.

For example, when simulating shuffling a deck of cards, the sample code simply takes the entire deck from top to bottom, and swaps the next card with a random one below it. Of course this violates the way that a real shuffle works. In a real shuffle, the cards at the top of the two stacks of the cut end up closer to the top. So I wrote a random shuffle algorithm that simulates the cut, and merges the two by taking cards randomly from each stack until one is exhausted.
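A minimal sketch of that riffle shuffle in Java (the class and method names are my own invention, and the “imperfect cut” offset is an assumption – the original exercise says nothing about how sloppy the cut should be):

```java
import java.util.Random;

public class Riffle {
    // Simulate one riffle shuffle: cut the deck near the middle, then
    // merge by repeatedly taking the top card of a randomly chosen
    // stack until one stack is exhausted. Cards near the top of each
    // stack stay near the top of the result, as in a real shuffle.
    public static int[] riffle(int[] deck, Random rng) {
        int cut = deck.length / 2 + rng.nextInt(5) - 2; // imperfect cut
        cut = Math.max(1, Math.min(deck.length - 1, cut));
        int[] merged = new int[deck.length];
        int top = 0, bottom = cut, out = 0;
        while (top < cut && bottom < deck.length) {
            merged[out++] = rng.nextBoolean() ? deck[top++] : deck[bottom++];
        }
        while (top < cut) merged[out++] = deck[top++];
        while (bottom < deck.length) merged[out++] = deck[bottom++];
        return merged;
    }
}
```

One shuffle of this kind is far from random, of course; a realistic simulation would apply it seven or so times in a row.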

The next assignment is to capture some statistics on a set of test scores. It’s a pretty simple problem: minimum and maximum values and the average. But you know where that goes: at the end of the term, the scores for all the assignments have to be rolled up into some final grade. This seemed like an interesting problem – coming up with some general mechanism for aggregating scores into a final grade.

We all know how terms start: the teacher hands out a syllabus with a weighting for each element of the course work: homework, quizzes, mid-terms, papers and finals are typical elements. Each element is given an expected weighting to the final grade.

Of course, it never works out that way. Some midterms are harder than others, but each should contribute the same weight to the final grade. This is sometimes accomplished by weighting the test scores so that the averages are the same. And what if the students move through the material faster or slower than in prior years? Might they not complete more or fewer assignments than expected?
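The core of the aggregation can be sketched in a few lines of Java (this is my own simplified formulation, not the ten-module version: it renormalizes the planned weights over the categories that actually occurred, which handles a term that runs short):

```java
import java.util.Map;

public class Grading {
    // Combine per-category averages (0..100) into a final percentage.
    // Planned weights are renormalized over the categories actually
    // present, so skipping a category still yields a sensible grade.
    public static double finalGrade(Map<String, Double> categoryAverages,
                                    Map<String, Double> plannedWeights) {
        double usedWeight = 0.0, total = 0.0;
        for (Map.Entry<String, Double> e : categoryAverages.entrySet()) {
            Double w = plannedWeights.get(e.getKey());
            if (w == null) continue; // unplanned category: ignore it
            usedWeight += w;
            total += w * e.getValue();
        }
        return usedWeight == 0.0 ? 0.0 : total / usedWeight;
    }
}
```

If, say, the final never happens, a homework average of 90 at weight 0.2 and a midterm average of 80 at weight 0.3 renormalize to a final grade of 84 rather than an absurd 42.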

So this simple little fifty-line program became a ten module monster. I can’t entirely blame my son Gregory for the damage done by my interview with him on grading policies at the JC he’s attending. But he did bring up a really interesting point: nobody but the professor knows the actual assignment scores. She produces a final letter grade, and that’s all that the records office knows.

We were trying to decide how to model this, and came up with the idea of the professor having a grade book with a private set of identifiers that link back to the student records held by the registrar. After each assignment is graded, the instructor looks up the grade book ID for the student, and adds the grade to the book against that ID. At the end of the term, the professor combines the scores to produce a class curve, and assigns a letter grade for each interval in the distribution. In the end, then, no student knows how close they were to making the cut on the next letter grade, so nobody knows whether or not they have a right to appeal the final grade.
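In Java, the essence of that design is simply that the roster mapping lives in a private field of the instructor’s class, so no caller can recover a student from a grade. A minimal sketch (names are my own; the letter-grade cutoffs here are fixed rather than curved against the class distribution, which is a simplification of what we actually discussed):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class Instructor {
    // Private roster: registrar's student ID -> random grade-book ID.
    // Nothing outside this class can link a grade back to a student.
    private final Map<String, String> roster = new HashMap<>();
    private final Map<String, List<Double>> gradeBook = new HashMap<>();

    public void enroll(String registrarId) {
        roster.put(registrarId, UUID.randomUUID().toString());
    }

    public void recordGrade(String registrarId, double score) {
        String bookId = roster.get(registrarId);
        gradeBook.computeIfAbsent(bookId, k -> new ArrayList<>()).add(score);
    }

    // Only the final letter grade ever leaves the class.
    public char letterGrade(String registrarId) {
        List<Double> scores = gradeBook.get(roster.get(registrarId));
        double avg = scores.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0);
        if (avg >= 90) return 'A';
        if (avg >= 80) return 'B';
        if (avg >= 70) return 'C';
        return 'F';
    }
}
```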

In my code model, therefore, I have two kinds of people: students and instructors. Now we normally identify people by their names – every time you fill out a form, that information goes on it. But sometimes names change.

In the grade book, of course, we also want identities to remain anonymous. We need mechanisms to make sure that IDs are difficult to trace back to the person being described. The NSA did this with records subpoenaed from the phone carriers – though nobody was convinced that the NSA wasn’t bypassing the restrictions that were supposed to prevent names from being linked to the phone calls until a warrant was obtained from a court. In the case of my simple gradebook model, it’s accomplished by making the class roster private to the “Instructor” class.

This all got me to thinking about how slippery “identity” is as a concept. It can be anything from the random number chosen by the instructor to a birth certificate identifier to a social security number to a residence. All of these things provide some definite information about a person, information that can be used to build a picture of their life. Some of it is persistent: the birth certificate number. Other identities may change: the social convention is that a woman changes her name when she marries. And in today’s mobile world, we all change residences frequently. A surprising change in my lifetime has been that my phone number doesn’t change when I change residence, and the phone number is a private number, where once it was shared with seven people.

So as I was modeling the grade book, I found myself creating an “Instructor” class and a “Student” class, and adding a surname and given name to both. I hate it when this happens; in the past I would have created a “Person” class to capture that information, and made “Student” and “Instructor” sub-classes of Person. But that always fails, of course: what happens when an instructor wants to sign up for an adult education class?

And so I hit upon this: what if we thought of all of these pieces of identifying information as various forms of an “Identity”? Then the instructor and student records each link to the identity which could be a “Personal Name.” That association of “Personal Name” with “Instructor” or “Student” reflects a temporary role for the person represented by the identity. That role may be temporary, which means that we need to keep a start and end date for each role. And the role itself may be identifying information – certainly a student ID is valid to get discount passes at the theater, for example.
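A sketch of the scheme in Java (type and field names are my own; real code would need more identity kinds than a personal name, and a registry of which roles an identity currently holds):

```java
import java.time.LocalDate;

// Identifying information is an Identity; a Role ties an Identity
// to a position ("Student", "Instructor") for a bounded interval.
interface Identity {
    String display();
}

class PersonalName implements Identity {
    private final String given, surname;
    PersonalName(String given, String surname) {
        this.given = given;
        this.surname = surname;
    }
    public String display() { return given + " " + surname; }
}

class Role {
    private final Identity identity;
    private final String role;
    private final LocalDate start;
    private LocalDate end;   // null while the role is still current

    Role(Identity identity, String role, LocalDate start) {
        this.identity = identity;
        this.role = role;
        this.start = start;
    }
    void endOn(LocalDate date) { this.end = date; }
    boolean activeOn(LocalDate date) {
        return !date.isBefore(start) && (end == null || !date.isAfter(end));
    }
    public String toString() { return identity.display() + " as " + role; }
}
```

The same person can then hold “Instructor” in one course and “Student” in another, each with its own dates, without any subclassing of a Person.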

The subtlety is that addresses and old phone numbers are reassigned to other people every now and then. The latter was a frequent hassle for people that got the phone number last held by a defunct pizza take-out. And it’s even worse for the family living right in the middle of America, which is the default address for every internet server that can’t be traced to a definite location. The unfortunate household gets all kinds of writs for fraud committed by anonymous computer hackers.

But I really wish that I had a tool that had allowed me to maintain a database with all of this information in it. I don’t think that I can reconstruct my personal history at this point. As it is, what I have in my personal records is my current identity: my credit card numbers (which BofA fraud detection keeps on replacing), my current address and phone number, my current place of employment. That is all that the computer knows how to keep.

With the upshot that I know far less about myself than the credit agencies do.

CLR’ing Away the .NET

When Microsoft first began offering “component” technology to the world, it was a result of unmanaged programming. I don’t mean “unmanaged” in the technical sense (more on that later). I mean “unmanaged” in the original sense: after smashing its office suite competition by tying a full-featured release of Office to Windows 3.1, Microsoft found its development teams competing with each other. When Word users asked for limited spread-sheet capabilities, the Word developers began recreating Excel. When Outlook users asked for sophisticated text formatting for the e-mails, the development team began recreating Word.

Now this reflects two things. The first and most significant is that the code base created by the application teams was not modular. The Word team should have been able to lift the basic spread-sheet engine right out of the Excel code base. The second was that Microsoft had a schizophrenic developer base. Visual Basic developers enjoyed the features of an interpreted language that allowed them to modify code during execution, while Visual C++ developers enjoyed the benefits of high-speed execution. Unfortunately, those two worlds didn’t talk well to each other. C++ uses ‘0’ to locate the first item in a list, while VB uses ‘1’.

Microsoft’s attempt to bridge these gulfs was their Component Object Model. As a response to the duplication of code, it was a little heavy: COM enabled users to add an Excel spreadsheet in Word, but at the cost of starting a full instance of Excel in the background, doubling the memory requirements. By contrast, I saw a demonstration of IBM SOMobjects at a software engineering conference in 1997 that brought a spreadsheet into a text editor while adding only 3% to the memory burden.

At that conference, IBM touted a world in which a graduate student could write add-ins to popular business applications. This was obviously not in the interests of Microsoft, whose dominance of the office application market fueled its profits. This was evident in their implementation of COM. When adding a new component to the operating system, the component registers its services (in the “Windows Registry,” of course). Microsoft published its Office services so that developers of enterprise applications could automatically create Word documents and Excel spreadsheets. That should have meant that other teams could create alternative implementations of those interfaces. To hobble that strategy, Microsoft did not include a reverse lookup capability in its registry. In other words, if you wanted to let your user pick which dictionary to use for a spell-checker, there was no way to find out which installed components provided a “Dictionary” service. You had to walk the entire registry and ask each component in turn whether it was a “Dictionary.” This was not a cheap operation: when I tried it in 1998, it took almost a minute.

On top of this, Microsoft biased its component technology to the VB developer, assuming that C++ developers were sophisticated enough to work around the inconsistencies. This was not a minor burden. What took three lines of code in VB could take a page in C++.

However, COM and its successor DCOM were the latest shiny toy, and many C++ developers flocked to the technology, even for pure C++ implementations. I was scandalized, because C++ had its own methods for creating reusable modules, methods that lay at the foundations of COM and DCOM underneath the cruft of type conversions and named method invocations. I finally found an article on MSDN that warned that COM and DCOM should only be used for systems that were configured dynamically. This included, famously, Visual Basic, host to a rich market of third-party user interface controls (known as “ActiveX” controls). But Microsoft’s advice was not heeded by the industry, and even today I am repackaging COM components as dynamically loaded libraries (DLLs) that publish native C++ objects.

I must admit that over time the work demanded of the C++ developer has moderated. Visual Studio can generate C++ interfaces using a COM “type library,” and allows users to decorate a class declaration with symbols that allow tools to automatically generate the COM wrappers that publish code to VB.

Unfortunately, the field tilted against the C++ developer when Microsoft introduced its .NET technology. One of the major charges leveled against C++ over the years is that developers need to explicitly manage the resources consumed by their programs. Memory in particular is a bugaboo, and one of the major challenges of writing a complex C++ application is ensuring that memory doesn’t “leak.” This frustration was catered to by the creators of Java and other “managed” languages (including Microsoft’s C#). Unfortunately, it encourages the fiction that memory is the only resource that developers need to manage, a fiction that is addressed explicitly in the documentation of Java and C# class libraries that open network connections or access external components such as databases.

Be that as it may, Microsoft had to decide whether to continue to support new technologies, such as HTML 5 and XML, upon the fundamental foundations of the machine architecture, or within the higher-level abstractions of the managed world. The overwhelming popularity of managed languages drove the choice. Microsoft no longer delivers full-featured libraries for C++ developers. For a long time, they could only access those features through the clumsy methods of COM programming.

This came to a head for my team last year when trying to implement a new feature that required parsing of XML files. A demonstration application was quickly written in C#, but as the effort to access that from our C++ code was prohibitive, we went looking for a third-party XML library. We couldn’t find one that did the job.

The lack of support for C++ libraries has created some distressing contradictions. C++ developers have always been proud to claim that code written in C++ runs twice as fast as code written in a managed language. Recent studies reveal, however, that processing a large file, such as sensor data produced by a networked device or log files from a cloud application, is dominated by the time it takes to read the file. The C++ libraries appear to take twice as long to read the file as the C# libraries.

Driven by this evidence to seek new methods for using .NET libraries in C++ code, I finally came upon Microsoft’s C++/CLI or CLR technology. In CLR, the developer has direct control over whether his objects are managed or unmanaged. This means that the speed of C++ execution can be leveraged when necessary, while also allowing access to the rich .NET libraries maintained by Microsoft. Originally CLR was advanced as a technology for migrating C++ applications to the .NET libraries, but it turned out that there were too many inconsistencies between the run-time environment established for native C++ applications and .NET applications.

But what about those of us with a million lines of code that runs within the native C++ execution environment? Is there no bridge?

I am glad to report that we have found it. It is to create a CLR library that exports unmanaged objects using the classic DLL methods that COM originally supplanted. The unmanaged objects wrap managed .NET components, and use C++/CLI methods to convert between C++ and .NET data types.

I am certain that there are some constraints on this strategy, particularly when trying to integrate .NET and C++ components in user interfaces or attempting to ensure data consistency during simultaneous execution in both environments. But for simple operations that drop into the managed world temporarily, it seems to work just fine.

And I find some joy in coming full circle, with only a few lines of code being able once again to write code as a C++ developer should, rather than as a second-class citizen in a market targeting developers solving far simpler problems than I confront every day.

Up in the Cloud

Information Systems, the discipline of organizing computers and software resources to facilitate decision-making and collaboration, is undergoing a revolution. The opportunity is enabled by cheap data storage and high-speed networking. The necessity is driven by the unpredictability of demand and the threat of getting hacked. These factors have driven the construction of huge data and compute centers that allow users to focus on business solutions rather than the details of managing and protecting their data.

As a developer, this proposition is really attractive to me. I’m building a sensor network at home, and I’d like to capture the data without running a server full time. I’d also like to be able to draw upon back-end services such as web or database servers without having to install and maintain software that is designed for far more sophisticated operations.

The fundamental proposition of the cloud is to create an infrastructure that allows us as consumers to pay only for the data and software that we actually use. In concept, it’s similar to the shift from cooking on a wood-fired stove fed by the trees on our lot to cooking on an electric range. Once we shift to electricity, if we decide to open a restaurant, we don’t have to plan ahead ten years to be certain that we have enough wood, we just pay for more electricity. Similarly, if I want to develop a new solution for home heating control, I shouldn’t have to pay a huge amount of money for software licenses and computer hardware up front – that should be borne by the end-users. And, just as a chef probably doesn’t want to learn a lot about forestry, so I shouldn’t have to become an expert in administration of operating systems, databases and web servers. Cloud services promise to relieve me of that worry.

It was in part to assess the reality of that promise that I spent the last two days at Microsoft’s Cloud Road Show in Los Angeles. What I learned was that, while they pursue the large corporate customers, Microsoft is still a technology-driven company, and so they want to hear that they are also helping individual developers succeed.

But there were several amusing disconnects.

Satya Nadella took the helm at Microsoft following Steve Ballmer’s debacles with Windows 8 and Nokia. Ballmer was pursuing Apple’s vision of constructing a completely closed ecosystem of consumer devices and software. Nadella, head of the Azure cloud services effort, blew the top off of that plan, declaring that Microsoft would deliver solutions on any hardware and operating system that defined a viable market. Perversely, what I learned at the roadshow was that Microsoft is still very much committed to hardware, but not the kind of hardware you can carry on your person. Rather, it’s football fields stacked three-high with shipping containers full of server blades and disk drives, each facility drawing the power consumed by a small city. None of the containers belongs to a specific customer (actually the promise is that your data will be replicated across multiple containers). They are provisioned for aggregate demand of an entire region, running everything from a WordPress blog to global photo-sharing services such as Pinterest.

This scale drives Microsoft to pursue enterprise customers. This is a threat to established interests – large data centers are not an exportable resource, and so provide a secure and lucrative source of employment for their administrators. But that security comes with the pressure of being a bottleneck in the realization of others’ ambitions and a paranoid mind-set necessary to avoid becoming the latest major data-breach headline. The pitch made at the roadshow was that outsourcing those concerns to Microsoft should liberate IT professionals to solve business problems using the operations analysis software offered with the Azure platform.

To someone entering this magical realm, however, the possibilities are dizzying. At a session on business analytics, when asked what analysis package would be best to use for those looking to build custom algorithms, the response was “whatever tool your people are familiar with.” This might include R (preferred by statistics professionals) or Python (computer science graduates) or SQL (database developers). For someone looking to get established, that answer isn’t comforting.

But it reveals something else: Microsoft is no longer in the business of promoting a champion – they are confident that they have built the best tools in the world (Visual Studio, Office, SharePoint, etc.). Their goal is to facilitate delivery of ideas to end customers. Microsoft also understands that means long-term maintenance of tightly coupled ecosystems where introduction of a malfunctioning algorithm can cost tens of millions of dollars, and viruses billions.

But what about the little guy? I raised this point in private after a number of sessions. My vision of the cloud is seeded by my sons’ experience in hacker communities, replete with “how-to” videos and open-source software modules. I see this as the great hope for the future of American innovation. If a living space designer in Idaho can source production of a table to a shop in Kentucky with a solid guarantee of supply and pricing comparable to mass-produced models, then we enter a world in which furniture showrooms are a thing of the past, and every person lives in a space designed for their specific needs. As a consumer, the time and money that once would have been spent driving around showrooms and buying high-end furniture is invested instead in a relationship with our designer (or meal planner, or social secretary).

Or how about a “name-your-price” tool for home budgeting? If you’ve got eighty dollars to spend on electricity this July, what should your thermostat setting be? How many loads of laundry can you run? How much TV can you watch? What would be the impact of switching from packaged meals to home-cooked? Can I pre-order the ingredients from the store? Allocate pickup and preparation time to my calendar?

Development of these kinds of solutions is not necessarily approachable at this time. The low-end service on Azure runs about $200 a month. From discussion, it appears that this is just about enough to run a Boy Scout Troop’s activity scheduling service. But I am certain that will change. Microsoft responded to the open-source “threat” by offering development tools and services for free to small teams. Their Azure IoT program allows one sensor to connect for free, with binary data storage at less than twenty dollars a month.

At breakfast on Wednesday, I shared some of these thoughts with a Microsoft solutions analyst focused on the entertainment industry. I ended the conversation with the admission that I had put on my “starry-eyed philosopher” personality. He smiled and replied “You’ve given me a lot to think about.” It was nice to spend some time with people that appreciate that.

Father, Finally

My father is in the final stages of his journey here. For the last month, he has been surrendering to the prostate cancer that is invading his bones. His principal fear has been of being a burden to my mother, and so he has methodically tried to further the process. The degradation of his sense of taste is facilitating his resolve. It is clear that his extremities are being consumed in the effort to maintain the operation of his heart, lungs and brain.

I could mourn the loss of his brilliant intellect, but that intellect was a mixed blessing to his intimates. It was a very powerful tool that supported convictions that could lead to harsh judgments. What I am finding instead is that as he weakens and submits to confusion, for the first time in my life I am able to proffer simple acts of tenderness. Stroking his head, rubbing his chest over his heart, holding his hand: these have been rewarded by looks of wonder.

I was caught up, for much of my life, in my father’s ambitions for programming. On the title bar, the “Programming” link offers entries that introduce his philosophy of design. It is my own formulation: my father adopted obscure terminology to ensure precision of meaning, and believed that practice under his tutelage was essential to competence. In fact, inspired by Hesse’s “The Glass Bead Game”, his vision of a training center was a monastery. Having grown up with Diagrammatic Programming, when I joined him in the family business in 1995, I rapidly began to innovate. He found this intolerable, and when I finally had the opportunity to articulate my logic to him, his retort was “Well, it’s clear that if you talk long enough, Brian, you could convince people of anything.”

My mother dreaded our conversations. Even as recently as a few months ago, she would retreat into her office when I came by to visit him. I recognized the dynamic that evolved between us, but also saw that the problem was far more complex than just our personal history. During the transfer of ownership of the mobile home park property to the residents, my father fought a tremendous legal and spiritual battle with the lawyers seeking to maximize the developer’s profits at the cost of displacing old friends. My father eventually shared that the lead lawyer was ticketed on a DC10 that crashed when the cabin door popped open in flight, but chose at the last minute not to board. (Yes, a textbook case of misdirected anger.) I had my own struggle with the family law community that cultivated fear on the 7th floor of the Van Nuys court house. After one conversation with my father, I heard the thoughts of one of them admitting of me, “He’s far stronger than we’ve given him credit for.” Eventually I used my father to send a message back: “I’ve done what I’ve done in order that it couldn’t be said that people weren’t given a chance to do the right thing.”

In spite of his spiritual capacities, my father always pooh-poohed my own experiences. I received several clues as to his motivations over the years. Having suffered the traumatic losses of John and Robert Kennedy and Martin Luther King Jr., he observed once that “All the good people get killed.” Although he was bailed out of the financial consequences of his own ambitions by an inheritance from a distant aunt, he worried about my financial insecurity, and may have considered wasteful my itinerant attendance at churches throughout the Conejo Valley.

But there was a deeper aspect to the problem that became clear only in 2008 when I went out to the Netherlands on a business trip. As I stepped to the visa counter in Amsterdam, I caught the thought “Well, [the Americans] are finally producing real people.” I immediately entered a warm and open relationship with the engineers we had come to visit, and a couple of nights into the trip, I woke up to them poking around in my mind. They found my father, and showed me behind him the tomb of an ancient Germanic king, still struggling to retain control of his line.

My father never had a father. Grandfather Balke left my grandmother, at the time a professional ballet dancer and later an anesthesiologist, after my father was born. From my father’s response to my physical affection, I came to see that the lack of a father was the wound that his antagonists, both ancient and modern, used to attempt to control him and his children.

That realization brought me back to a day when, returning to work after lunch, I waited at a stop light outside the executive suites rented by my brother. The usual argument over priorities was raging in my head. Suddenly, a wave of energy moved through my mind from left to right. Both the stop light and the radio in my car went dead at the same instant, and a woman’s voice announced firmly “His job is to prove to people that love works.”

My father worried about his lack of success, voicing his concern that he didn’t know what it was about him that brought failure where others less talented had achieved success. On Sunday he let me tell him this: “There’s so much good in you, Dad, but the world is full of things that see good and pile dirt all over it. It’s really hard to love somebody without leaving an opening back the other way. One of the great frustrations in my life has been that every time I tried to reciprocate your caring was that you shut me out, as though there was something frightening inside of you that you wanted to protect me from. I’m sorry if I became angry with you at times.

“There are some things about loving that a man can learn only from a father. Next time, find a good father, Dad. It will be a wonderful life.”

Software and Agility

Back in the ’80s, when the Capability Maturity Model (CMM) movement was gathering steam, surveys reported that half of all software projects failed. Even today, a significant number of developers report that they have never worked on a successful software project. I’ve written about the relationship between this problem and Moore’s law in the past, but hucksters selling cure-alls don’t have time to investigate root causes.

This is evident most often in comparisons of development methodologies. Historically, corporate America applied the “Waterfall Model”, usually traced to a 1970 paper by Winston Royce (though he never used the name himself). Royce identified seven critical activities in software development: systems requirements, software requirements, analysis, design, implementation, verification and operation. The seven follow a definite chain of information dependencies, suggesting the “waterfall” analogy. But Royce himself observed that no project followed that sequence. There were all kinds of feedback loops from later stages to earlier stages.

What is astonishing to me is that later practitioners removed the first and last step. This encourages amnesia about the evolution of the institutions that software developers support. Prior to World War II, most businesses were dominated by “tribal knowledge” of their operations. Goals were set from on high, but implementation was organic and often opaque. That changed after the war: confronted with the daunting logistics of WW II, the armed services had formed a logistical planning office and trained its practitioners. It was these men, including Robert McNamara, who went out and transformed the practices of corporate management in the 50s.

Thus the importance of the “systems requirements” stage of the waterfall process. Information systems were being injected into organizations whose theory of operation was vastly different from actual performance. Initial users of structured analysis, for example, discovered that many significant decisions were made by white-collar workers loitering around the water cooler, bypassing the hierarchical systems of reporting required by their organizational structure. Deploying an information system that enforced formal chains of authorization often disrupted that decision making, and organizations suffered as a result.

The common charge leveled against the Waterfall model is that the requirements are never right, and so attempts to build a fully integrated solution are doomed to fail. This has led to models, such as Agile and Lean software development, that promote continuous delivery of solutions to customers. But remember what supports that delivery: ubiquitous networking and standard software component models (including J2EE, Spring, SQL databases, and .NET) that allow pieces to be replaced dynamically while systems are operating. Those technologies didn’t exist when the waterfall model was proposed. And when they did arrive, proponents of the model immediately suggested a shift to “rapid prototyping” activities that would place working code before key end users as early in the project as possible. The expectation was that the politically fraught early stages of requirements discovery could then be avoided.

Actually, skipping those early stages might be possible at this point in time. Information systems instrument operations so thoroughly that SAP now advertises the idea that they allow a business to manifest a “soul.” Web service architectures allow modified applications to be presented to a trial population while the old application continues to run. Technology may now be capable of supporting continuous evolution of software solutions.

But removing the systems requirements stage of the process leaves this problem: where do requirements come from? Watching the manipulation of statistics by our presidential candidates, only the naive would believe that the same doesn’t occur in a corporate setting. Agile and Lean models that promise immediate satisfaction weaken the need for oversight of feature specification, perhaps opening the door to manipulation of application development in support of personal ambitions among the management team.

Control of such manipulation will be possible only when integrated design is possible – where the purpose of implementing a feature is shown in the context of a proposed operation. Currently that kind of design is not practiced – although Diagrammatic Programming has demonstrated its possibility.

In our current context, however, the wisdom of the CMM is still to be heeded. In a comment to an author pushing Agile over Waterfall development, I summarized the CMM’s five stages as follows:

  1. Define the boundary around your software process, and monitor and control the flow of artifacts across that boundary.
  2. Require that each developer describe his or her work practices.
  3. Get the developers to harmonize their practices.
  4. Create a database to capture the correlations between effort (3) and outcomes (1).
  5. Apply the experience captured in (4) to improve outcomes.

This is just good, sound, evidence-based management, and the author thanked me for explaining it to him. He had always thought of the CMM as a waterfall enforcement tool, rather than as a management process.

And for those arguing “Waterfall” vs. “Agile” vs. “Lean”: if you don’t have CMM-based data to back up your claims, you should be clear that you’re really involved in shaking up organizational culture.

Abuse in the Linux Kernel Community

Proclamations of concern over the abusiveness of the Linux Kernel Community have been growing louder in the open-source world. Steven Vaughan-Nichols summarizes the concerns in Computerworld. My comment on the matter?

Ancandune remarks wisely on the problem that “rude and hostile” imposes on the transmission of knowledge. I do not necessarily subscribe to his characterization of the motivating psychology. Perfectionists are driven by their own set of hostile interior voices. They don’t just produce something and throw it over the wall – they lie awake at night thinking about all the ways it can blow up in their face. What Linus may be attempting to demonstrate in his communications is how he goes about thinking when he writes code.

Is Linus a healthy person? That’s for him to judge.

The important question is whether the community is healthy. Steve Jobs and Bill Gates had boardrooms filled with over-sized egos to help them manage their succession plans. What is Linus going to do? Anoint a successor? Or will the community devolve into a WWF RAW! donnybrook with the last man standing holding the belt? Another possibility is that the corporations that finance many contributors will step in and appoint a successor.

Linus’s authority arose organically over many years. The community allows him the right to be critical. But it is not being critical of others that conditions his success – it is his ability to think critically. The community should recognize that distinction, and mercilessly criticize and purge those that emulate his style without bearing his gifts or responsibilities.

To illustrate my point regarding self-criticism, here’s the content of an e-mail characterizing a problem we had with the build at work recently:

It’s the usual stupidity – I don’t even remember why I created this file, but it’s just a copy of MotorIDCommander.cpp. It was probably intended to link AutoCommCommander with MotorIDCommander, but I never modified the contents.

Anyways – it’s excluded from the build in debug mode but not in release mode. Khalid is off at physical therapy today with the project file checked out, so I can’t publish a fix. If you can do it locally, that would get you moving forward.

This is like the fourth or fifth time I’ve done this – left a file in the build for release mode after excluding it in debug mode.

Domain Domination

As a person with broad intellectual interests, I might be an anachronism. One of the problems of free market economics is that it exploits our strengths and exacerbates our weaknesses. People who seek a healthy balance don’t fit naturally in the system. Fortunately, I took up my career as a software developer during a sweet spot of sorts – enough infrastructure had been established that we didn’t have to worry about the details of how a computer manages memory and peripherals or does arithmetic on different data types, but the industry had not yet become a self-sustaining economic system driven by the generation and sharing of digital data. As a generalist, then, I was valuable as a translator between the digital realm and the “normal” world.

I was struck by the magic of the digital reality. My father enjoyed sharing stories of how he could make programs break in the early days by abusing their input devices, but by the time I came on the scene, the electrical engineers had succeeded in creating a world in which the computer never seemed to get tired, made your mess disappear without fuss, and always did exactly what you asked. Knowing men, I wasn’t surprised that many were seduced completely by that fantasy. In my case, I was seduced by the fact that if you knew a little about software, you could get any productive person to talk to you in the hope of partnering to parlay their expertise into a dot-com fortune.

In translating those conversations into software, I was fortunate to have object-oriented development methods to exercise. They allowed me to create software abstractions that correspond well with the goals of my users. In engineering applications, concerned with the operation of actual machinery, object-oriented methods are a particularly strong fit.

That’s not so much the case in the software industry today. Companies such as Google and Facebook have managed to compile huge stores of data, and aspire to correlate that information with economic activity. There’s really no definite theory behind those explorations, so we’ve seen the rise of languages that efficiently describe algorithms that filter, transform and correlate random pieces of data.

The recruiting challenge facing engineering companies is lampooned in a GE ad in which a new hire finds himself competing for attention against the developer of a mobile app that puts fruit hats on pictures of your pet. GE is competing against nascent monopolies (Google and Facebook again the exemplars) that throw money at developers just to keep them out of the hands of their competitors. I faced the same challenge when seeking to grow my current team.

But when exploring the technologies (Haskell, Clojure, and others) used by Google and others for analysis of large data stores, what struck me most was how terribly dry they are. There’s no sense of connection to people and the choices that they make. To me that takes a lot of fun out of my practice.

I felt this while working through the examples in Troelson’s Pro C# and the .NET 4.5 Framework. Confronted with examples bearing names like “ExtractAppDomainHostingThread” and “MyAsyncCallbackMethod”, I found myself figuratively tearing out my hair. Yes, these names are self-documenting, in the sense that they forecast accurately what we find in the code, but they aren’t even entertaining, much less actually fun.

When Troelson begins exploring how .NET supports an application that has to perform many separate tasks in parallel, he introduces a class called Printer that writes a number to the screen and then waits a short time before writing the next number. By running many Printers in parallel, we can see clearly the unpredictability of the results in the screen output.

Of course I am offended by this whole concept. No Printer in the world ever behaved like this. So, given this class that does something meaningless while wasting time, I renamed it “Useless.” Rather than invoking “PrintNumbers”, I tell my Useless class to “WasteTime.” As methods for corralling wayward tasks are advanced, I further the metaphor with methods such as “WanderIdly” and “LanguishInAQueue.”
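Troelson’s original is C#, but the renamed class is easy to sketch in Python. This is just an illustrative analogue, not his code: the class and method names follow the renaming described above, and the run-to-run variation in the interleaved output is the whole point of the exercise.

```python
import threading
import time

results = []  # shared log of emitted lines (list.append is thread-safe in CPython)

class Useless:
    """A stand-in for Troelson's Printer class, renamed as described above."""
    def __init__(self, name):
        self.name = name

    def waste_time(self):
        # Emit a short sequence of numbers with small pauses, so the
        # interleaving across threads is visibly unpredictable.
        for i in range(5):
            results.append(f"{self.name}: {i}")
            time.sleep(0.01)

# Run several Useless workers in parallel, then wait for them all.
workers = [threading.Thread(target=Useless(f"worker-{n}").waste_time)
           for n in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print("\n".join(results))
```

Run it a few times: every worker always emits its own numbers in order, but the merge of the three sequences changes from run to run.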

My son and I meet most Saturdays for lunch at the Fresh Brothers in the Westlake Village Promenade. When he interrupted my exercises, I talked him through these examples, and he burst out laughing. Now that’s success.

So what’s the developer trapped in the digital world-view to do? My suggestion would be a return to assembly coding. At Los Alamos in the ’50s, my father picked up the habit of trying to read the consonant-rich listings. He would become mightily amused as he punctuated them with lip-smacks and shrill sirens, decorations evolved in the secret society of machine developers trapped on the isolated buttes of New Mexico.

Artificers of Intelligence

The chess program on a cell phone can beat all but the best human players in the world. It does this by considering every possible move on the board, looking forward perhaps seven to ten turns. Using the balance of pieces on the board, the algorithm works back to the move most likely to yield an advantage as the game develops.
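The brute-force search just described can be sketched with plain minimax. A real chess engine is far too large for a sketch, so this hypothetical stand-in uses a trivial take-1-2-or-3 subtraction game in place of chess; the structure is the same: enumerate every legal move, recurse a fixed number of plies, score the leaves, and work the best score back up to the root.

```python
def legal_moves(pile):
    # Every possible move from this position: take 1, 2, or 3 from the pile.
    return [m for m in (1, 2, 3) if m <= pile]

def minimax(pile, depth, maximizing):
    # Terminal position: the player to move has no moves and loses.
    if pile == 0:
        return -1 if maximizing else 1
    if depth == 0:
        # Horizon reached: a stand-in for the balance-of-pieces evaluation.
        return 0
    scores = [minimax(pile - m, depth - 1, not maximizing)
              for m in legal_moves(pile)]
    return max(scores) if maximizing else min(scores)

def best_move(pile, depth=7):
    # Work back to the move most likely to yield an advantage.
    return max(legal_moves(pile),
               key=lambda m: minimax(pile - m, depth - 1, False))

# Leaving a multiple of 4 behind loses the game for the opponent,
# so from a pile of 5 the search settles on taking 1.
print(best_move(5))
```

Chess replaces `legal_moves` with move generation and the depth-0 score with a material-balance function, but the back-propagation of scores is exactly this shape.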

These algorithms are hugely expensive in energetic terms. The human brain solves the same problem in a far more efficient fashion. A human chess player understands that certain combinations of pieces provide leverage over the opposing forces. As opportunities arise to create those configurations, the player focuses attention on those pieces, largely ignoring the rest of the board. That means that the human player considers only a small subset of the moves considered by the average chess program.

This advantage is the target of recent research using computerized neural networks. A neural net is inspired by the structure of the human brain itself. Each digital “node” is a type of artificial neuron. The nodes are arranged in ranks. Each node receives input values from the nodes in the prior rank, and generates a signal to be processed by the neurons in the next rank. This models the web of dendrites used by a human neuron to receive stimulus and the axon by which it transmits the signal to the dendrites of other neurons.

In the case of the human neuron, activation of the synapse (the gap separating axon and dendrite) causes it to become more sensitive, particularly when that action is reinforced by positive signals from the rest of the body (increased energy and nutrients). In the computerized neural network, a mathematical formula is used to calculate the strength of the signal produced by a neuron. The effect of the received signals and the strength of the generated signal are controlled by parameters – often simple scaling factors – that can be adjusted, node by node, to tune the behavior of the network.
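The rank-by-rank propagation just described can be sketched in a few lines. The logistic squashing function below is one common choice for the “mathematical formula” (the text does not name one), and the weights and biases play the role of the adjustable scaling factors.

```python
import math

def neuron_output(inputs, weights, bias):
    # A node scales each signal from the prior rank, sums the results,
    # and squashes the total through a formula (here a logistic sigmoid).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def forward(inputs, layers):
    # Propagate signals rank by rank; each layer is a list of
    # (weights, bias) pairs, one pair per node.
    signal = inputs
    for layer in layers:
        signal = [neuron_output(signal, w, b) for w, b in layer]
    return signal

# Two sensory inputs -> a rank of two nodes -> a single output node.
net = [
    [([0.5, -0.5], 0.0), ([1.0, 1.0], -1.0)],  # first rank
    [([2.0, -2.0], 0.0)],                      # output rank
]
print(forward([1.0, 0.0], net))
```

Each node only ever sees the outputs of the prior rank, mirroring the dendrite/axon wiring described above.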

To train an artificial neural network, we proceed much as we would with a human child: we provide experiences (a configuration of pieces on a chess board) and give feedback (a type of grade on the test) that evaluates the resulting moves. For human players, that experience often comes from actual matches. To train a computerized neural network, many researchers draw upon the large databases of game play that have been established for study by human players. The encoding of the piece positions is provided to the network as “sensory input” (much as our eyes do when looking at a chess board), and the output is the new configuration. Using an evaluative function to determine the strength of each final position, the training program adjusts the scaling factors until the desired result (“winning the game”) is achieved as often as possible.
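Real training uses gradient methods such as backpropagation, but the adjust-and-evaluate loop described above can be sketched as a simple trial-and-error search over the scaling factors. The toy evaluative function here is purely illustrative; in the chess case it would be the strength of the final position.

```python
import random

def train(params, score, steps=200, step_size=0.1, seed=0):
    # Crude sketch of the tuning cycle: perturb one scaling factor at a
    # time and keep the change only if the evaluative score improves.
    rng = random.Random(seed)
    best = score(params)
    for _ in range(steps):
        i = rng.randrange(len(params))
        trial = list(params)
        trial[i] += rng.uniform(-step_size, step_size)
        if score(trial) > best:
            params, best = trial, score(trial)
    return params, best

def score(p):
    # Toy evaluative function with its peak at (1.0, -2.0).
    return -((p[0] - 1.0) ** 2) - ((p[1] + 2.0) ** 2)

tuned, achieved = train([0.0, 0.0], score)
print(tuned, achieved)
```

The front-loaded cost discussed below is visible even here: every candidate adjustment requires a fresh evaluation, and a real network has millions of parameters rather than two.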

In the final configuration, the computerized neural network is far more efficient than its brute-force predecessors. But consider what is going on here: the energetic expenditure has merely been front-loaded. It took an enormous amount of energy to create the database used for the training, and to conduct the training itself. Furthermore, the training is not done just once, because a neural network that is too large does not stabilize its output (too much flexibility) and a network that is too small cannot span the possibilities of the game. Finding a successful network design is a process of trial-and-error controlled by human researchers, and until they get the design right, the training must be performed again and again on each iteration of the network.

But note that human chess experts engage in similar strategies. Sitting down at a chess board, the starting position allows an enormous number of possibilities, too many to contemplate. What happens is that the first few moves determine an “opening” that may run to ten or twenty moves performed almost by rote. These openings are studied and committed to memory by master players. They represent the aggregate wisdom of centuries of chess players about how to avoid crashing and burning early in the game. At the end of the game, when the pieces are whittled down, players employ “closings”, techniques for achieving checkmate that can be committed to memory. It is only in the middle of the game, in the actual cut-and-thrust of conflict, that much creative thinking is done.

So which of the “brains” is more intelligent: the computer network or the human brain? When my son was building a chess program in high school, I was impressed by the board and piece designs that he put together. They made playing the game more engaging. I began thinking that a freemium play strategy would be to add animations to the pieces. But what if the players were able to change the rules themselves? For example, allow the queen to move as a knight for one turn. Or modify the game board itself: select a square and modify it to allow passage only on the diagonal or in one direction. I would assert that a human player would find this to be a real creative stimulus, while the neural network would just collapse in confusion. The training set didn’t include configurations with three knights on the board, or restrictions on moves.

This was the point I made when considering the mental faculties: intelligence is not determined by our ability to succeed under systems of fixed rules. Intelligence is the measure of our ability to adapt our behaviors when the rules change. In the case of the human mind, we recruit additional neurons to the problem. This is evident in the brains of blind people, in which the neurons of the visual cortex are repurposed for processing of other sensory input (touch, hearing and smell), allowing the blind to become far more “intelligent” decision makers when outcomes are determined by those qualities of our experience.

This discussion, involving a game without much concrete consequence, appears to be largely academic. But there have been situations in which this limitation of artificial intelligence has been enormously destructive. It turns out that the targeting systems of drones employ neural networks trained against radar and visual observations of friendly and enemy aircraft. Those drones have misidentified friendly aircraft in live-fire incidents, firing their air-to-air missiles and destroying the target.

So proclamations by some that we are on the cusp of true artificial intelligence are, in my mind, a little overblown. What we are near is a shift in the power allocated to machines that operate with a fixed set of rules, away from biological mechanisms that adapt their thinking when they encounter unexpected conditions. That balance must be carefully managed, lest we find ourselves without the power to adapt.

Staying Cool with R

Before returning to the control industry in 2008, I was employed in business systems development. My employer was hot to get in on the off-shore gambling business, but was kind enough to ask me what I was interested in. I offered my concern that people were overwhelmed with the demands imposed by 24/7 communications, to the point that their ability to actually immerse themselves in the experience of the moment was degrading. I thought that a system that guided them through reflection and looked for correlations between mood and experience might be the basis for helping them find people and places that would allow them to express their talents and find joy.

His reaction was to try to stake me at the gambling tables in Reno.

But he did recognize that I was motivated by a deep caring for people. That’s led me in other directions in the interim. I’ve been trying to moderate the harsh tone in the dialog between scientists and mystics. I’ve accomplished about as much as I can – the resolution I have to offer is laid out in several places. I just need to let the target audience find the message.

So I’ve turned back to that vision. A lot has changed in the interim, most importantly being the unification of the Windows platform. This means that I can try to demonstrate the ideas in a single technology space. There’s only so many minutes in the day, after all.

I began with a review of statistical analysis. I’ve got a pair of books, bought back when I was a member of the Science Book of the Month club, on analysis of messy data. That provided me with the mathematical background to make sense of Robert Kabacoff’s R in Action. However, it’s one thing to do analysis on the toy data sets that come with the R libraries. Real data always has its own character, and requires a great deal of curation. It would be nice to have some to play with.

One approach would be to begin digging into Bayesian language net theory and researching psychological assessment engines in preparation for building a prototype that I could use on my own. But I already have a pretty evolved sense of myself – I don’t think that I’d really push the engine. And I would really like to play with the Universal applications framework that Microsoft has developed. On top of that, the availability of an IoT (internet of things) build of Windows 10 for Raspberry Pi means that I can build a sensor network without having to learn another development environment.

So the plan is to deploy temperature and humidity sensors in my apartment. It’s a three-floor layout with a loft on the top floor. The middle floor contains a combination living/dining area and the kitchen. Both the loft and the kitchen have large sliders facing west, which means that they bake in the afternoon. On the bottom floor, the landing opens on one side to the garage and on the other side to my bedroom. The bedroom faces east behind two large canopies, although the willow tree allows a fair amount of light through. There’s a single thermostat on the middle floor. So it’s an interesting environment, with complicated characteristics.

While thermal balance also involves the state of windows, doors and appliances, I think that I can get a pretty good sense of those other elements by monitoring the air that flows around them. Being a hot yoga masochist, I’m also curious regarding the effect of humidity.

So I’ve got a Raspberry Pi on the way, and have installed Microsoft’s Visual Studio Community on my Surface Pro. Combination temperature and humidity sensors cost about ten dollars. While real-time data would be nice, I don’t think that for the purposes of my study I’ll need to link to the Wi-Fi to push the data out to a cloud server. I can use my laptop to upload it when I get home each day. And there’s some work to do in R: the time series analysis includes seasonal variations on annual trends, and I certainly expect my measurements to show that, but there will also be important diurnal variations. Finally, the activation of temperature control appliances (air conditioner and furnace) needs to be correlated with the data. I don’t want to invest in a Nest thermostat, or figure out how to get access to the data, so I’m going to see if I can use Cortana to post notes to my calendar (“Cortana – I just set the air conditioning to 74 degrees”).
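The logging side of this plan is modest. Here’s a hedged sketch in Python (the actual build would target Windows 10 IoT, and `read_sensor` below is a hypothetical stand-in that fabricates values) showing the shape of the daily CSV file that would later be pulled into R as a time series.

```python
import csv
import random
from datetime import datetime

def read_sensor(location):
    # Hypothetical stand-in for a real ten-dollar temperature/humidity
    # sensor; it just fabricates plausible values for the sketch.
    temp = round(random.uniform(18.0, 30.0), 1)      # degrees C
    humidity = round(random.uniform(30.0, 60.0), 1)  # percent RH
    return temp, humidity

def log_readings(path, locations):
    # Append one timestamped row per location, ready to upload from the
    # laptop at the end of the day.
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for loc in locations:
            temp, humidity = read_sensor(loc)
            writer.writerow([datetime.now().isoformat(timespec="seconds"),
                             loc, temp, humidity])

log_readings("readings.csv", ["loft", "kitchen", "bedroom"])
```

Appending timestamped rows keeps the diurnal structure intact, and the thermostat notes dictated to Cortana could later be merged on the timestamp column.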

Obviously there’s a lot to learn here. But no single piece is overwhelming until I get to the data analysis. It’s just a cobbling together of small pieces. Should be fun! And if I can figure out how to manage my windows and doors and appliances to reduce my energy expenditures – well, that would be an interesting accomplishment.