Linked Exodus

Microsoft has announced that it is to buy LinkedIn, the professional networking service, for $26.2 billion.

This blogger has learned that the CEOs of Apple, IBM, Google, Facebook and Oracle have combined to issue a request for proposal for a worm that will delete their employees from the LinkedIn databases. The foremost responders are state insecurity services in Russia and North Korea.

Outside Microsoft’s Redmond headquarters, an elderly man in Biblical garb has been spotted carrying a paralyzed snake and chanting “Let my people go! Let my people go!” Meanwhile, at the Vatican, reacting to nanobot activity that reworked the ceiling of the Sistine Chapel to replace Adam with a robot, Pope Francis has offered pleas for the archangel Michael to slay the first-born AI of every major software company.

Oh, Tay, Can You See?

Microsoft put up a speech-bot named ‘Tay’ on Twitter last week, and it took less than twenty-four hours for it to become a sexist Nazi. While labelled as “artificial intelligence,” Tay did not actually understand what it was saying – it merely parroted the speech of other users. On 4chan’s /pol/ board, that includes a lot of dialog that most of us would consider inappropriate.

What distresses me is that Microsoft hoped to have Tay demonstrate the conversational skills of a typical teenager. Well, maybe it did!

In a recent dialog on the “liar Clinton,” I probed for specific proof, and received back the standard Fox News sound bites. When I described the Congressional hearings on Benghazi, the accuser had the grace to be chastened. This is typical of so much of our political dialog: people parrot sayings without availing themselves of access to the official forums in which real information is exchanged. The goal is to categorize people as “us” or “other,” with the goal of justifying arrangements for the distribution of power that benefit the “us.”

Donald Trump is a master of this political practice. Apparently his campaign doesn’t do any polling. He simply puts up posts on Facebook, and works the lines that people like into his speeches.

So I worry: did Microsoft actually succeed in its demonstration? Most American teenagers don’t understand the realities of the Holocaust or the difficulties of living under a totalitarian regime. In that experiential vacuum, do they actually evolve dialog in the same way that Tay did – with the simple goal of “fitting in?”

Somewhat more frightening is that Donald Trump appears to employ algorithms not too different from Tay’s. For God’s sake, this man could be president of the most powerful country in the world! He’s got to have more going on upstairs than a speech bot!

Fortunately, many teenagers, when brought into dialog regarding offensive speech, actually appreciate receiving a grounding in fact. You’d hope that our politicians would feel the same.

Wish You Were There

Google has recently announced a “photo location” service that will tell you where a picture was taken. They have apparently noticed that every tourist takes the same photos, and so if they have one photo tagged with a location, they can assign that location to all similar photos.

I’m curious, as a developer, regarding the nature of the algorithms they use. As a climate change alarmist, I’m also worried about the energy requirements for the analysis. It turns out that most cloud storage is used to store our selfies (whether still or video). Over a petabyte a day is added to YouTube, with the amount expected to grow by a factor of ten by 2020. A petabyte is a million billion bytes. By contrast, the Library of Congress can be stored in 10 terabytes, or one percent of what is uploaded daily to YouTube.
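To put those scales side by side, here’s the back-of-the-envelope arithmetic (the figures are the rough estimates quoted above, not measurements):

```python
# Back-of-the-envelope check of the storage figures above.
PB = 10**15          # a petabyte: a million billion bytes
TB = 10**12          # a terabyte

youtube_daily = 1 * PB          # "over a petabyte a day"
library_of_congress = 10 * TB   # rough digitization estimate

ratio = library_of_congress / youtube_daily
print(f"Library of Congress is {ratio:.0%} of one day's uploads")
# prints: Library of Congress is 1% of one day's uploads
```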

Whatever Google is doing to analyze the photos, there’s just a huge amount of data to process, and I’m sure that it’s a huge drain on our electricity network. And this is just Google. Microsoft also touts the accumulation of images as a driver for growth of its cloud infrastructure. A typical data center consumes energy like a mid-size city. To reduce the energy costs, Microsoft is considering deployment of its compute nodes in the ocean, replacing air conditioning with passive cooling by sea water.

But Google’s photo location service suggests another alternative. Why store the photos at all? Rather than take a picture and use Google to remind you where you were, why not tell Google where you were and have it generate the picture?

When I was a kid, the biggest damper on my vacation fun was waiting for the ladies to arrange their hair and clothing when it came time to take a photo. Why impose that on them any longer? Enjoy the sights, relax, be yourself. Then go home, dress for the occasion, and send up a selfie to a service that will embed you in a professional scenery photo, adjusting shadows and colors for weather and lighting conditions at the time of your visit.

It might seem like cheating, but remember how much fun it was to stick your face in those cut-out scenes on the boardwalk when you were a kid? It’s really no different than that. And it may just save the world from the burdens of storing and processing the evidence of our narcissism.

Up in the Cloud

Information Systems, the discipline of organizing computers and software resources to facilitate decision-making and collaboration, is undergoing a revolution. The opportunity is created by cheap data storage and high-speed networking. The necessity is driven by the unpredictability of demand and the threat of getting hacked. These factors have driven the construction of huge data and compute centers that allow users to focus on business solutions rather than the details of managing and protecting their data.

As a developer, I find this proposition really attractive. I’m building a sensor network at home, and I’d like to capture the data without running a server full time. I’d also like to be able to draw upon back-end services such as web or database servers without having to install and maintain software that is designed for far more sophisticated operations.

The fundamental proposition of the cloud is to create an infrastructure that allows us as consumers to pay only for the data and software that we actually use. In concept, it’s similar to the shift from cooking on a wood-fired stove fed by the trees on our lot to cooking on an electric range. Once we shift to electricity, if we decide to open a restaurant, we don’t have to plan ahead ten years to be certain that we have enough wood, we just pay for more electricity. Similarly, if I want to develop a new solution for home heating control, I shouldn’t have to pay a huge amount of money for software licenses and computer hardware up front – that should be borne by the end-users. And, just as a chef probably doesn’t want to learn a lot about forestry, so I shouldn’t have to become an expert in administration of operating systems, databases and web servers. Cloud services promise to relieve me of that worry.

It was in part to assess the reality of that promise that I spent the last two days at Microsoft’s Cloud Road Show in Los Angeles. What I learned was that, while it pursues the large corporate customers, Microsoft is still a technology-driven company, and so it wants to hear that it is also helping individual developers succeed.

But there were several amusing disconnects.

Satya Nadella took the helm at Microsoft following Steve Ballmer’s debacles with Windows 8 and Nokia. Ballmer was pursuing Apple’s vision of constructing a completely closed ecosystem of consumer devices and software. Nadella, head of the Azure cloud services effort, blew the top off of that plan, declaring that Microsoft would deliver solutions on any hardware and operating system that defined a viable market. Perversely, what I learned at the roadshow was that Microsoft is still very much committed to hardware, but not the kind of hardware you can carry on your person. Rather, it’s football fields stacked three-high with shipping containers full of server blades and disk drives, each facility drawing the power consumed by a small city. None of the containers belongs to a specific customer (actually the promise is that your data will be replicated across multiple containers). They are provisioned for aggregate demand of an entire region, running everything from a WordPress blog to global photo-sharing services such as Pinterest.

This scale drives Microsoft to pursue enterprise customers. This is a threat to established interests – large data centers are not an exportable resource, and so provide a secure and lucrative source of employment for their administrators. But that security comes with the pressure of being a bottleneck in the realization of others’ ambitions and a paranoid mind-set necessary to avoid becoming the latest major data-breach headline. The pitch made at the roadshow was that outsourcing those concerns to Microsoft should liberate IT professionals to solve business problems using the operations analysis software offered with the Azure platform.

To someone entering this magical realm, however, the possibilities are dizzying. At a session on business analytics, when asked what analysis package would be best to use for those looking to build custom algorithms, the response was “whatever tool your people are familiar with.” This might include R (preferred by statistics professionals) or Python (computer science graduates) or SQL (database developers). For someone looking to get established, that answer isn’t comforting.

But it reveals something else: Microsoft is no longer in the business of promoting a champion – they are confident that they have built the best tools in the world (Visual Studio, Office, SharePoint, etc.). Their goal is to facilitate delivery of ideas to end customers. Microsoft also understands that means long-term maintenance of tightly coupled ecosystems where introduction of a malfunctioning algorithm can cost tens of millions of dollars, and viruses billions.

But what about the little guy? I raised this point in private after a number of sessions. My vision of the cloud is seeded by my sons’ experience in hacker communities, replete with “how-to” videos and open-source software modules. I see this as the great hope for the future of American innovation. If a living space designer in Idaho can source production of a table to a shop in Kentucky with a solid guarantee of supply and pricing comparable to mass-produced models, then we enter a world in which furniture showrooms are a thing of the past, and every person lives in a space designed for their specific needs. As a consumer, the time and money that once would have been spent driving around showrooms and buying high-end furniture is invested instead in a relationship with our designer (or meal planner, or social secretary).

Or how about a “name-your-price” tool for home budgeting? If you’ve got eighty dollars to spend on electricity this July, what should your thermostat setting be? How many loads of laundry can you run? How much TV can you watch? What would be the impact of switching from packaged meals to home-cooked? Can I pre-order the ingredients from the store? Allocate pickup and preparation time to my calendar?
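As a sketch of how the core of such a name-your-price calculation might look: the toy function below fits discretionary appliance uses into a fixed dollar budget. Every rate and consumption figure in it is invented for illustration; a real tool would pull them from the utility and the appliances themselves.

```python
def affordable_uses(budget_dollars, rate_per_kwh, base_kwh, kwh_per_use):
    """How many uses of an appliance fit in the budget after the base load?

    All rates and consumption figures passed in below are hypothetical --
    check a real utility bill for actual numbers.
    """
    kwh_left = budget_dollars / rate_per_kwh - base_kwh
    return max(0, int(kwh_left // kwh_per_use))

# With $80 for July: a hypothetical $0.20/kWh rate, a 250 kWh base load
# (fridge, lights, thermostat), and 1.5 kWh per load of laundry.
loads = affordable_uses(80, 0.20, 250, 1.5)
print(loads)  # prints: 100
```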

Development of these kinds of solutions is not necessarily approachable at this time. The low-end service on Azure runs about $200 a month. From discussion, it appears that this is just about enough to run a Boy Scout Troop’s activity scheduling service. But I am certain that will change. Microsoft responded to the open-source “threat” by offering development tools and services for free to small teams. Their Azure IoT program allows one sensor to connect for free, with binary data storage at less than twenty dollars a month.

At breakfast on Wednesday, I shared some of these thoughts with a Microsoft solutions analyst focused on the entertainment industry. I ended the conversation with the admission that I had put on my “starry-eyed philosopher” personality. He smiled and replied, “You’ve given me a lot to think about.” It was nice to spend some time with people who appreciate that.

The Brain is God

Human beings can do really amazing things with their minds. For example, play shortstop, which means fielding a ball reliably even though it’s never hit the same way twice. The complexity of that skill defies our understanding, so we just sit back and enjoy.

Less complex manifestations of the mind’s magic are treated as curiosities by the neuroscientists. There is, for example, the lady who dialed the time recording one day and was able to tell perfect time forever after. Oliver Sacks in The Man Who Mistook His Wife for a Hat describes twins who could do prime factorization of up to eight-digit numbers, apparently by “seeing” the collection of numbers. This was a skill that vanished when they were separated. And we have stories of people who could hear radio broadcasts, purportedly through the antenna of their dental fillings.

In attempting to explain these phenomena, the neurophysiologist invokes the breathtaking complexity of the brain. For example, it has been said that the number of states that can be encoded by our brains exceeds the number of particles in the universe. Of course, that’s not really terribly impressive, because those particles also have states, so the brain could never capture the state of the universe. But it’s a nice number, very large, which creates a fuzzy assurance that there’s so much to be learned about the brain that we’ll eventually be able to settle all its unexplained manifestations.

Well, we’ve hit a roadblock. Recent analysis indicates strongly that we’ll never be able to simulate the brain. This is really terribly frustrating. Now those of us carrying the labels “schizophrenic” and “delusional” will never be able to pin the scientific materialists to the mat, forcing them to recognize the existence of the soul.

The Soul of Technology

My father, who once held an open fascination with Darth Vader as the ultimate integration of man and machine, for many years sought to keep me focused on technology by disputing the validity of my spiritual experience. He has mellowed in the last few months of his life, and we’ve had some great conversations. Sunday afternoon’s conversation brought us around to Elon Musk’s ambition to terraform Mars. He asked my opinion of the idea, and I said that I felt a certain sympathy for Mr. Musk, but I countered the claim that we need an escape route from the mess that we are making of Earth. We’re going to have to solve our problems here, and when we do, the personality of Mr. Musk – from wherever it is at that point – is going to look back on this life and say “Wow. What a boondoggle that was! What a complete waste of my time!” He seems like a man with good intentions, and I’d just like for him to be able to look back and be proud of what he has accomplished.

When I was blogging out at Gaia, one of the most persistent voices in the “Question of the Day” group was a Kiwi nearing the end of his life. Every question produced a number of lengthy posts on the same topic: the necessity of investment in digital technologies that would allow us to monitor everything, and then to link the information to a master control system that would ensure the well-being of everyone on earth. When pressed, he claimed that this was important to him because if it didn’t happen really soon, he knew that he wouldn’t be able to live forever. I offered him the observation that he seemed to need God so deeply that he believed mankind must create him.

The protagonist in both Ma and Golem is an alien named Corin Taphinal, come to Earth to try to protect life from destruction at humanity’s hands. He describes the situation this way:

The digital technology of [Earth’s] civilization had fascinated him. It was based upon the conversion of the most mystically inert substance in the universe – amorphous silicon – into precisely contaminated crystals. Its proponents spoke of blanketing the globe in digital sensors, constructing communications networks and data centers to aggregate the data, and the development of expert systems algorithms to assure the stability of human communities in the face of massive ecosystem disruption.

Why, in the name of all that was sacred, would anyone choose such methods? Over billions of years, the insinuation of Life into any planet’s surface established a far more sensitive and detailed sensory apparatus, supported by the most widely and freely distributed source of energy available, with representatives far better adapted to local conditions than people.

With this background, you might ask, “Why, Brian, do you work in technology?” Is it just to pay the bills?

I’ll protest my own rhetoric: that’s just going too far. Just because I don’t believe that technology is the ultimate solution to our problems doesn’t mean that I don’t find merit in its pursuit.

First, the world is an unstable place. I’m not just talking about natural disasters: for large parts of the year, seasonal variation makes life pretty tough for most animals. Technology stabilizes local conditions, allowing us to focus on developing our personalities. I appreciate that I don’t have to think full-time about weather, but can rely upon sensors and actuators controlled by computers to do it for me. That our solutions are making the challenge more difficult (global climate change) doesn’t mean that the technology isn’t valuable. The problem is that most of us, rather than developing our personalities, use our freedom from existential threat to indulge our procreative urges.

The solution to that is education. While knowledge is dangerous (life is incredibly vulnerable in engineering terms), I believe that understanding empowers us to make far better choices. We know that when the value of a woman’s mind has been affirmed through education, she becomes pretty determined to limit the number of her children. The response of traditionalists has been to beat women down with fear. In that case, the best means of breaking down the rationale of political demagogues is disintermediation: bringing people together to demonstrate that the “enemy” is a lot like us. Communications technology addresses both of these problems, providing open access to knowledge in the privacy of the home and bridging the distance that separates us.

And finally – motivating my particular fascination with programming – software rescues philosophy from academic obscurity. The purpose of philosophy is to strengthen our ability to describe experience and thus to negotiate solutions. Through linkage to our financial and industrial infrastructure, software allows us almost instantly to express the solutions we negotiate. Nor is that a one-off benefit: when (as in object-oriented design or COBOL) the software is defined using terms understood in the application domain, those terms act as sign-posts for the maintenance developer given the task of implementing new requirements.

I spoke, however, of rescuing philosophy, and I mean that. Software encodes philosophy, not as a book on a shelf, but as an agent for delivering solutions to the philosopher’s constituency. With the Affordable Care Act, software allowed us to implement social programs, assess their effectiveness, and adjust the rules to achieve better results. This is a demanding test of our philosophy, both as regards the degree to which it reflects the truth and its value in organizing the use of our intelligence when conditions change.

As I have offered before (see The Trust Mind), I believe that eventually we will be freed from the material infrastructure we use to distribute power. However, as I see the long period from the Covenant of the Flood (in which humanity was authorized to create Law) to Jesus as an exercise in demonstrating the fallibility of fixed systems of rules, so I see this era (as articulated by Jeremy Rifkin in The Empathic Civilization) as a proving ground for our compassion. As technology accelerates the pace of change and resources become more and more scarce, only ideas of real merit will survive. Every thinking being will be confronted with the necessity of disciplining his thoughts.

While the demagogues continue to rant and rave on television, conditions are evolving under which every individual will find such blathering contradicted by direct personal experience. Then we will progress beyond the “birthing pains” mentioned by Jesus into the full flowering of the influence of Christ in our lives. When our ideas are angelic, they will be received and implemented by angels. Life will be vastly different then, and our digital infrastructure, with all its energetic excess, will largely fall away.

I see my work as intimately connected to the manifestation of that future. My work in motion control creates systems that relieve people of drudgery, thus liberating their energies for mindful and compassionate engagement with the world around them. My work as a software developer builds the discipline that is essential in organizing and propagating ideas that I believe are of merit. It’s not enough that those ideas are clever – they actually have to work.

Artificers of Intelligence

The chess program on a cell phone can beat all but the best human players in the world. It does this by considering almost every possible move on the board, looking forward perhaps seven to ten turns. Using the balance of pieces on the board, the algorithm works back to the move most likely to yield an advantage as the game develops.

These algorithms are hugely expensive in energetic terms. The human brain solves the same problem in a far more efficient fashion. A human chess player understands that there are certain combinations of pieces that provide leverage over the opposing forces. As opportunities arise to create those configurations, they focus their attention on those pieces, largely ignoring the rest of the board. That means that the human player considers only a small sub-set of the moves considered by the average chess program.
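The brute-force lookahead described above is the classic minimax procedure, and it can be sketched in a few lines. This is a toy illustration, not a chess engine: the move generator and evaluation function below are invented stand-ins operating on integer "positions," and real programs also prune branches rather than visiting literally every one.

```python
def moves(state):
    # Every legal move from a position; here, just two per position.
    return [state * 2, state * 2 + 1]

def evaluate(state):
    # Stand-in for "balance of pieces on the board" at a leaf position.
    return state % 7

def minimax(state, depth, maximizing):
    """Score a position by looking ahead `depth` turns and backing
    the leaf evaluations up the tree."""
    if depth == 0:
        return evaluate(state)
    scores = [minimax(s, depth - 1, not maximizing) for s in moves(state)]
    return max(scores) if maximizing else min(scores)

def best_move(state, depth):
    """Work back to the move most likely to yield an advantage."""
    return max(moves(state), key=lambda s: minimax(s, depth - 1, False))
```

Doubling the lookahead depth multiplies the number of positions visited by the branching factor raised to that depth, which is where the energy cost comes from.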

This advantage is the target of recent research using computerized neural networks. A neural net is inspired by the structure of the human brain itself. Each digital “node” is a type of artificial neuron. The nodes are arranged in ranks. Each node receives input values from the nodes in the prior rank, and generates a signal to be processed by the neurons in the next rank. This models the web of dendrites used by a human neuron to receive stimulus and the axon by which it transmits the signal to the dendrites of other neurons.

In the case of the human neuron, activation of the synapse (the gap separating axon and dendrite) causes it to become more sensitive, particularly when that action is reinforced by positive signals from the rest of the body (increased energy and nutrients). In the computerized neural network, a mathematical formula is used to calculate the strength of the signal produced by a neuron. The effect of the received signals and the strength of the generated signal is controlled by parameters – often simple scaling factors – that can be adjusted, node by node, to tune the behavior of the network.
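The node computation just described amounts to a weighted sum of the prior rank's signals passed through a squashing formula. A minimal sketch, using a sigmoid as the activation (one common choice); the weight and bias values shown are arbitrary illustration numbers:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial node: a weighted sum of the prior rank's signals,
    squashed into (0, 1) by a sigmoid activation. The weights and bias
    are the tunable scaling factors mentioned above."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def rank(inputs, weight_rows, biases):
    """One rank of nodes, each reading every signal from the prior rank."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two input signals feeding a rank of two nodes (arbitrary weights).
signals = rank([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]], [0.0, 0.1])
```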

To train an artificial neural network, we proceed much as we would with a human child. We provide them experiences (a configuration of pieces on a chess board) and give feedback (a type of grade on the test) that evaluates their moves. For human players, that experience often comes from actual matches. To train a computerized neural network, many researchers draw upon the large databases of game play that have been established for study by human players. The encoding of the piece positions is provided to the network as “sensory input” (much as our eyes do when looking at a chess board), and the output is the new configuration. Using an evaluative function to determine the strength of each final position, the training program adjusts the scaling factors until the desired result (“winning the game”) is achieved as “often as possible.”
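The adjust-the-scaling-factors loop can be illustrated on a single node. The sketch below uses plain gradient descent to teach one sigmoid node the logical AND of its inputs; the labeled examples stand in for the "database of game play," and the feedback signal plays the role of the grade on the test. Real systems backpropagate errors through many ranks, and the learning rate and epoch count here are invented for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Labeled training examples: (inputs, desired output) for logical AND.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 1.0
for _ in range(2000):                          # repeated passes over the data
    for inputs, target in examples:
        out = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
        error = target - out                   # the "grade on the test"
        step = rate * error * out * (1 - out)  # scaled sigmoid gradient
        weights = [w + step * x for x, w in zip(inputs, weights)]
        bias += step

# After training, the node fires strongly only when both inputs are on.
```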

In the final configuration, the computerized neural network is far more efficient than its brute-force predecessors. But consider what is going on here: the energetic expenditure has merely been front-loaded. It took an enormous amount of energy to create the database used for the training, and to conduct the training itself. Furthermore, the training is not done just once, because a neural network that is too large does not stabilize its output (too much flexibility) and a network that is too small cannot span the possibilities of the game. Finding a successful network design is a process of trial-and-error controlled by human researchers, and until they get the design right, the training must be performed again and again on each iteration of the network.

But note that human chess experts engage in similar strategies. Sitting down at a chess board, the starting position allows an enormous number of possibilities, too many to contemplate. What happens is that the first few moves determine an “opening” that may run to ten or twenty moves performed almost by rote. These openings are studied and committed to memory by master players. They represent the aggregate wisdom of centuries of chess players about how to avoid crashing and burning early in the game. At the end of the game, when the pieces are whittled down, players employ “endings”, techniques for achieving checkmate that can be committed to memory. It is only in the middle of the game, in the actual cut-and-thrust of conflict, that much creative thinking is done.

So which of the “brains” is more intelligent: the computer network or the human brain? When my son was building a chess program in high school, I was impressed by the board and piece designs that he put together. They made playing the game more engaging. I began thinking that a freemium play strategy would be to add animations to the pieces. But what if the players were able to change the rules themselves? For example, allow the queen to move as a knight for one turn. Or modify the game board itself: select a square and modify it to allow passage only on the diagonal or in one direction. I would assert that a human player would find this to be a real creative stimulus, while the neural network would just collapse in confusion. The training set didn’t include configurations with three knights on the board, or restrictions on moves.

This was the point I made when considering the mental faculties out at http://www.everdeepening.org. Intelligence is not determined by our ability to succeed under systems of fixed rules. Intelligence is the measure of our ability to adapt our behaviors when the rules change. In the case of the human mind, we recruit additional neurons to the problem. This is evident in the brains of blind people, in which the neurons of the visual cortex are repurposed for processing of other sensory input (touch, hearing and smell), allowing the blind to become far more “intelligent” decision makers when outcomes are determined by those qualities of our experience.

This discussion, involving a game without much concrete consequence, appears to be largely academic. But there have been situations in which this limitation of artificial intelligence has been enormously destructive. It turns out that the targeting systems of drones employ neural networks trained against radar and visual observations of friendly and enemy aircraft. Those drones have misidentified friendly aircraft in live-fire incidents, firing their air-to-air missiles and destroying the target.

So proclamations by some that we are on the cusp of true artificial intelligence are, in my mind, a little overblown. What we are near is a shift in the power allocated to machines that operate with a fixed set of rules, away from biological mechanisms that adapt their thinking when they encounter unexpected conditions. That balance must be carefully managed, lest we find ourselves without the power to adapt.

Staying Cool with R

Before returning to the control industry in 2008, I was employed in business systems development. My employer was hot to get in on the off-shore gambling business, but was kind enough to ask me what I was interested in. I offered my concern that people were overwhelmed with the demands imposed by 24/7 communications, to the point that their ability to actually immerse themselves in the experience of the moment was degrading. I thought that a system that guided them through reflection and looked for correlations between mood and experience might be the basis for helping them find people and places that would allow them to express their talents and find joy.

His reaction was to try to stake me at the gambling tables in Reno.

But he did recognize that I was motivated by a deep caring for people. That’s led me in other directions in the interim. I’ve been trying to moderate the harsh tone in the dialog between scientists and mystics. I’ve accomplished about as much as I can – the resolution I have to offer is laid out in several places. I just need to let the target audience find the message.

So I’ve turned back to that vision. A lot has changed in the interim, most important being the unification of the Windows platform. This means that I can try to demonstrate the ideas in a single technology space. There are only so many minutes in the day, after all.

I began with a review of statistical analysis. I’ve got a pair of books on the analysis of messy data, bought back when I was a member of the Science Book of the Month club. That provided me with the mathematical background to make sense of Robert Kabacoff’s R in Action. However, it’s one thing to do analysis on the toy data sets that come with the R libraries; real data always has its own character, and requires a great deal of curation. It would be nice to have some to play with.

One approach would be to begin digging into Bayesian language net theory and researching psychological assessment engines in preparation for building a prototype that I could use on my own. But I already have a pretty evolved sense of myself – I don’t think that I’d really push the engine. And I would really like to play with the Universal applications framework that Microsoft has developed. On top of that, the availability of an IoT (internet of things) build of Windows 10 for Raspberry Pi means that I can build a sensor network without having to learn another development environment.

So the plan is to deploy temperature and humidity sensors in my apartment. It’s a three-floor layout with a loft on the top floor. The middle floor contains a combination living/dining area and the kitchen. Both the loft and the kitchen have large sliders facing west, which means that they bake in the afternoon. On the bottom floor, the landing opens on one side to the garage and on the other side to my bedroom. The bedroom faces east behind two large canopies, although the willow tree allows a fair amount of light through. There’s a single thermostat on the middle floor. So it’s an interesting environment, with complicated characteristics.

While thermal balance also involves the state of windows, doors and appliances, I think that I can get a pretty good sense of those other elements by monitoring the air that flows around them. Being a hot yoga masochist, I’m also curious regarding the effect of humidity.

So I’ve got a Raspberry Pi on the way, and have installed Microsoft’s Visual Studio Community on my Surface Pro. Combination temperature and humidity sensors cost about ten dollars. While real-time data would be nice, I don’t think that for the purposes of my study I’ll need to link to the Wi-Fi to push the data out to a cloud server. I can use my laptop to upload it when I get home each day. And there’s some work to do in R: the time series analysis includes seasonal variations on annual trends, and I certainly expect my measurements to show that, but there will also be important diurnal variations. Finally, the activation of temperature control appliances (air conditioner and furnace) needs to be correlated with the data. I don’t want to invest in a Nest thermostat, or figure out how to get access to the data, so I’m going to see if I can use Cortana to post notes to my calendar (“Cortana – I just set the air conditioning to 74 degrees”).
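Since the analysis tooling is still up in the air, here is a sketch of the diurnal-variation computation in Python rather than R, run over a handful of invented readings; the room names and temperatures are illustration only:

```python
from collections import defaultdict

# A handful of made-up (hour, room, temperature °F) readings of the
# kind the sensors would log each day.
readings = [
    (6, "loft", 66.0), (15, "loft", 84.0), (15, "loft", 86.0),
    (6, "bedroom", 64.0), (15, "bedroom", 72.0),
]

def diurnal_profile(rows):
    """Average temperature by (room, hour) -- the diurnal variation
    that sits underneath the annual seasonal trend."""
    sums = defaultdict(lambda: [0.0, 0])
    for hour, room, temp in rows:
        acc = sums[(room, hour)]
        acc[0] += temp
        acc[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

profile = diurnal_profile(readings)
# In this toy data, the west-facing loft peaks in the afternoon.
```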

Obviously there’s a lot to learn here. But no single piece is overwhelming until I get to the data analysis. It’s just a matter of cobbling together small pieces. Should be fun! And if I can figure out how to manage my windows and doors and appliances to reduce my energy expenditures – well, that would be an interesting accomplishment.