A Proposal for Full Unemployment

As corporations have now achieved personhood, we advance with trepidation towards a future in which the needs of our artificial constructs take priority in our economy. Embracing my demise as an economic agent, in a flash of insight I realized that acceptance of the personhood of machines is the path to human freedom! Robots have no self-interest, but if recognized as independent economic agents they can generate cash flow in a closed system of production (with recycling becoming the source of raw materials for factories). Tax revenues on our robot citizens will usher in a Leisure Age for unemployed humans!
Best of all, in the event of tax revenue shortfall, we can always increase the size of the robot economy by one of three means:

  • Increase the number of robot workers and renewable energy systems.
  • Increase robot wages.
  • Move production up-market.

In the Matrix movies, humanity was oppressed by machine overlords. Recognizing the complete reversal of circumstances in this achievable future of thought free from worry, I dub this economic system “The Mentrix.”

Pronounce it like you’re from Brooklyn.

Robin Hood Goes Digital

Kaspersky Lab, the digital security company, has reported that the technology used to attack Iran’s uranium enrichment program (“Stuxnet”) has made its way into the banking system. The malware is hard to detect because it does not run from files – it exists only in memory, being passed from machine to machine over internal networks.

My comment is a meditation on the inevitability of this in exploitative corporate cultures.


Azethoth666 wrote:

>> Or perhaps its just taking them time to get around to everyone manually?

Considering the corporate culture of the American banking system, this seems highly likely. The post-mortem on the Wells Fargo account-creation fraud was that management propagated unreasonable performance requirements, with the result that only fraudulent conduct by employees would produce the desired results. However, the executives, some of whom were ousted with huge bonuses, did not make the decision to commit fraud. They were protected from direct involvement by decisions made by employees fearing for their livelihoods.

That situation is ripe for exploitation by criminal elements, and in fact employees caught in that system would be likely to take a “Robin Hood” attitude to their compromise of corporate security.

Tech Anti-Experience

Jon Evans at TechCrunch offers an analysis of the value of experience in software development. My reply:


From the time that the electric motor was offered as a flexible method for distributing mechanical power, it was nearly 40 years before belt-driven machines were replaced. According to historians, the controlling factor was establishing a community of technologists who understood how to leverage that flexibility.

Information technology has upended this constraint. Differentiating technologies are created and distributed to self-organizing tribes that use their specialization to gain competitive advantage over established peers in months.

This is fed by the industry’s rock-star economics: nobody knows where the next Apple or Facebook or Twitter is going to arise, so the industry finds the most exploitable source of labor (young men with raging hormones) and sets them to churning out code on unreasonable schedules.

The more thoughtful among “experienced” developers might look at the proliferation of technologies and draw parallels to youth counter-culture and slang in society at large. That would be to ignore the mature centers of the industry that serve to continuously remove impediments to the creation of distributed solutions. Younger developers have fewer things to think about – at least until the infrastructure begins to creak under the burden of their unanticipated success. Today, cloud-based SaaS and PaaS are eroding even that bulwark for experience.

So the author is correct, although maybe for the wrong reasons. My fear is that the real reasons also erode the value of the advice in the last two paragraphs. With the disruption of the “old boys’ network” by the World Wide Web, trying to keep up with technology may be a fool’s errand.

Intelligence and Creativity

Joseph at Rationalizing the Universe explores the modern formulation of living with uncertainty, rooted in Gödel’s theorem of logical incompleteness. The following discussion ensued:


Brian

You point out correctly that Gödel’s theorem is restricted to a fairly narrow problem: proving that a proof system is “correct” – i.e., that its axioms and operations are consistent. In other words, we cannot take a set of axioms and apply the operations to disprove any of the other axioms.
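For the curious, the two theorems can be stated schematically – my own gloss, not part of the original exchange – for a consistent, effectively axiomatized theory T strong enough to express basic arithmetic:

    % Gödel's incompleteness theorems, stated schematically for a consistent,
    % effectively axiomatized theory T that can express basic arithmetic:
    \[ \exists\, G_T :\;\; T \nvdash G_T \;\;\text{and}\;\; T \nvdash \lnot G_T \quad \text{(first theorem)} \]
    \[ T \nvdash \mathrm{Con}(T) \quad \text{(second theorem)} \]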

This seems to lead to the conclusion that we can’t trust our proofs of anything, which means that there are no guarantees that our expectations will be met. Unfortunately, expectations are undermined by many other problems, among them determination of initial conditions, noise, and adaptation. The last is the special bête noire of sociology, as people will often violate social norms in order to assuage primitive drives.

At this point in my life, I am actually not at all troubled by these problems. Satisfaction is not found in knowing the truth, it is found in realizing creative possibilities. If we could use mathematics to optimize the outcome of social and economic systems, we would have no choices left. Life would become terribly boring. So what is interesting to me is to apply understanding of the world to imagine new possibilities. Mathematics is a useful tool in that process, particularly when dealing with dumb matter.

This brings me back to the beginning of the post: you state that “mathematics is the unspoken language of nature.” If there is anything that Gödel’s theorem disproves, it is precisely that statement. Mathematics is a tool, just as poetry and music are tools. At times, both of the latter have transported my mind to unseen vistas; mathematics has never had that effect.


Joseph

You raise a very interesting point; if we could optimise everything then would we take all of the joy out of being…. you may well be right. I know I get a lot of my satisfaction from the quest to know more. Although I disagree that Gödel’s theorems disprove my original statement in this sense; language is essentially about describing things. That is why you can have different languages but they are easily translatable…. bread/pan/brot etc…. we all know what they mean because they all describe the same thing. In exactly the same way, mathematics describes things that actually exist; that isn’t to say nature is mathematics at all – mathematics is the language of nature but it is just as human in its construction as the spoken word. But is matter not matter because a human invented the label? Matter is matter.

To me, these theorems don’t break down all of our proofs; but what they do show is a vital point about logic. One which I think is going to become an increasingly big issue as the quest to understand and build artificial intelligence increases – can we ever build a mind as intelligent as a human’s when a human can know the answer to a non-programmable result? We hope so! Or rather I do – I do appreciate it’s not for everyone.


Brian

I appreciate your enthusiasm, but I must caution that the mathematical analogies in classical physics cannot be extended in the same way to the quantum realm. Richard Feynman warned us that there is no coherent philosophy of quantum mechanics – it is just a mathematical formulation that produces accurate predictions. Ascribing physical analogies to the elements of the formulation has always caused confusion. An extreme example was found in the procedure of renormalization, in which observable physical properties such as mass and charge are produced as the finite ratio of divergent integrals.

Regarding human and digital intelligence: one of the desirable characteristics of digital electronics is its determinism. The behavior of transistor gates is rigidly predictable, as is the timing of the clock signals that controls the propagation of signals through logic arrays. This makes the technology a powerful tool for implementing our intentions.
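As a minimal sketch of what that determinism means (a toy model of mine, not a hardware description), a gate treated as a pure function always yields the same output for the same inputs:

    # A toy model of the determinism described above: a NAND gate treated as a
    # pure function always returns the same output for the same inputs.
    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", nand(a, b))  # prints the full, fixed truth table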

But true creativity does not arise from personal control, which only makes me loom larger on the horizon of others’ lives, threatening (as the internet troll or Facebook post-oholic) to erase their sense of self. Rather, creativity in its deepest sense arises in relation, in the consensual intermingling of my uniqueness with the uniqueness of others.

Is that “intelligence?” Perhaps not – the concept itself is difficult to define, and I believe that it arises as a synthesis of more primitive mental capacities, just as consciousness does. But I doubt very much that Artificial Intelligence is capable of manifestations of creativity, because fundamentally it has no desires. It is a made thing, not a thing that has evolved out of a struggle, spanning billions of years, for realization. Our creativity arises out of factors over which we have no control: meeting a spouse-to-be, witnessing an accident, or suffering a debilitating disease. We have complex and subtle biochemical feedback systems which evolved to recognize and adjust to the opportunities and imperatives of living. We are a long way from being able to recreate that subtlety in digital form, and without those signals, meaningful relation cannot evolve, and thus creativity is still-born.

Internet Autocracy

Article at The Conversation on the internet as a centralized form of media that can be exploited by authoritarian regimes, particularly among citizens using it primarily for entertainment.

My comment:


I believe that the jury is still out on this one. One of the factors that fueled international respect for authoritarian regimes was external propaganda. Leaders of developing nations were beguiled by the perception that state-run economies and militaries were just as effective as those managed by decentralized cultures. The internet completely skewers that façade.

Most authoritarian regimes are sustained by revenues obtained through labor and resource exploitation by the developed world. As the consumer nations shift to automated and sustainable alternatives (respectively), those revenues will dry up. The old Roman dictum “bread and circuses” fails when there is no bread. When there is no bread, people will be forced to organize in a decentralized fashion to obtain basic goods. The internet will be the mechanism that facilitates that organization.

And there is still the lesson of the Cold War: if the international community can avoid creating external conflicts to justify the fear-mongering, the investment in lies eventually divorces the leadership from reality. The internet only provides the appearance of greater efficacy. The people learn to go about their business independently by pushing responsibility upwards. The retort is always “I’ll do it, boss, if you show me exactly how.” It’s like that scene in Life of Brian where the two prison guards stutter and garble words until the interrogator leaves, then start speaking coherently.

In the meanwhile, liberal societies will rocket ahead using the benefits of network effects (by Metcalfe’s law, the value of a communications network grows roughly as the square of the number of participants). In the era of rapid change driven by global climate stress, that facility will be essential to survival.
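For reference, the formula behind that claim, in its usual simplified form (my gloss):

    % Metcalfe's law (a simplification): with n participants there are
    % n(n-1)/2 possible pairwise links, so a rough proxy for network value is
    \[ V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\sim\; n^{2}. \]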

Google-Plex

When my sons learned about exponents in elementary school, they came home chattering about googol – which is 10^100, or ‘1’ followed by 100 zeros. This is an impressively large number: physicists estimate that the entire universe only contains enough matter to make 10^93 hydrogen atoms. However, as with their nine-year-old peer who invented the terminology, the next step was even more exciting: the googolplex, or ‘1’ followed by a googol of zeros. When challenged to exhibit the number, the originator adopted a more flexible definition, purportedly:

‘1’ followed by writing zeroes until you get tired

One of the most attractive aspects of science and engineering research is just this child-like asking of “what comes next?” The tendency is to think in exponential terms, although not always in powers of ten. Moore’s Law in semiconductors held that microprocessors should double in power every 18 months, and inspired a generation of chip designers, so that now our cell phones contain more processing power than the super-computers of the ’70s. Digital storage systems have improved even faster, allowing us to indulge the narcissism of the “selfie” culture: every day we store 50 times as much data in the cloud as would be required to encode the entire Library of Congress.
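As a back-of-the-envelope illustration of that doubling (the 45-year span below is my own rough assumption, purely for illustration), the arithmetic looks like this:

    # Back-of-the-envelope sketch; the 45-year span and the 18-month doubling
    # period are assumptions for illustration only.
    years = 45                    # roughly a 1970s supercomputer to a modern phone
    doubling_period = 1.5         # the 18-month figure quoted above, in years
    doublings = years / doubling_period
    growth = 2 ** doublings
    print(f"{doublings:.0f} doublings -> about {growth:,.0f}x more processing power")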

The problem is when what appears to be the next obvious step in physics and engineering becomes decoupled from social need. We spend billions of dollars each year launching satellites to probe the structure of the cosmos, and even further billions providing power and material to the facilities that probe the matter that surrounds us. “Super-high” skyscrapers are pushing towards a kilometer in height, led by Saudi Arabia and Dubai, who appear to be engaged in a penis-envy contest that seems tolerable mostly because it side-steps the threat of mass extinction posed by the last such contest (the nuclear arms race between the US and the USSR).

In the case of software development, the “what next” syndrome is particularly seductive. It used to be that we’d have to go to a store and buy a box to get a better operating system, word processor or tax preparation package. Now we purchase them online and download them over the internet. This means that a software developer with a “better idea” can push it out to the world almost immediately. Sometimes the rest of us wish that wasn’t so – we call those experiences “zero-day exploits,” and they cost us collectively tens of billions of dollars to defend against and clean up afterwards.

But most of the time, we benefit from that capability, particularly as artificial intelligence improves. We already have medical diagnostic software that does better than most physicians at matching symptoms to diseases. Existing collision-avoidance algorithms allow us to sleep behind the wheel as long as a lane change isn’t required, and self-driving cars are only a few years away from widespread use. Credit card transaction monitoring protects us from fraud. These all function as well as they do because the rules described in software don’t disappear when the machine turns off. While the understanding encoded in the neural pathways of a Nobel Laureate vanishes upon death, software algorithms are explicitly described in a form that can be analyzed long after the originator retires. The lineage of successors can therefore improve endlessly upon the original.
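To make that point concrete, here is a toy, hypothetical rule of the sort such monitoring systems encode – the function name and thresholds are invented, but the rule stays written down where later maintainers can read and refine it:

    # A toy, hypothetical fraud rule; the thresholds are invented for
    # illustration. The point is that the rule is explicit and inspectable
    # long after its original author has moved on.
    def looks_fraudulent(amount, country, home_country, txns_last_hour):
        if amount > 5000 and country != home_country:
            return True               # large purchase far from home
        if txns_last_hour > 10:
            return True               # unusually rapid card activity
        return False

    print(looks_fraudulent(6200, "RO", "US", 2))   # True
    print(looks_fraudulent(40, "US", "US", 1))     # False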

The combination of robust, long-lived memory and algorithms means that the software industry believes that it is curating the evolution of a new form of mind. Those developing that mind seek to extend its capabilities in both directions: recording everything that happens, and defining rules to control outcomes in every possible situation. Their ambition, in effect, is to create God.

Contemplating this future, I had a colleague at work effuse that he “looked forward” to Big Brother, believing that in an era in which everything was known and available online for discovery, people would think twice about doing wrong.

In response, I suggested that many religions teach us that such a being already exists, but has the wisdom to understand that confronting us with our failings is not always the most productive course. In part, that is because error is intrinsic to learning. While love binds us together in an intimacy that allows us to solve problems together that we could never solve alone, we’re still going to hurt each other in the process. Big Brother can’t decide who we should love, and neither can God: each of us is unique, and part of life’s great adventure is finding that place in which we create greatest value for the community we nurture.

Furthermore, Big Brother is still a set of algorithms under centralized control. In George Orwell’s 1984, the elite used its control of those algorithms to subjugate the world. By contrast, the mind of God is a spiritual democracy: we choose and are accepted only by reciprocal gestalts.

Finally, Big Brother can never empathize with us. It can monitor our environment and our actions, it can even monitor our physiological state. But it cannot know whether our response is appropriate to the circumstances. It cannot be that still, quiet voice in our ear when our passion threatens to run amok. Big Brother cannot help us to overcome our weakness with strength – it can only punish us when we fail.

So, you in the artificial intelligence community, if you believe that you can create a substitute for God from digital technology, you should recognize that the challenge has subtleties that go beyond omniscience and perfect judgment. It includes the opportunity to engage in loving co-creation, and so to enter into possibilities that we can’t imagine, and therefore that are guaranteed to break any system of fixed rules. Your machine, if required to serve in that role, will unavoidably manifest a “Google-plex,” short for a “Google complex.”