Yesterday I indicated that at some point in the near future business decisions will not only be helped by software programs, but that the role of humans will change as we leave ever bigger business decisions to computer-based programs.

Today, the reliance on software for all kinds of decision making in companies, including servicing their customers, is more widespread and advanced than many realise. Steve Lohr in the NYTimes:

Trading stocks, targeting ads, steering political campaigns, arranging dates, besting people on “Jeopardy” and even choosing bra sizes: computer algorithms are doing all this work and more.

Not all of these examples are considered “management decisions” per se (choosing bra sizes, for instance, is not in most companies), but it is clear that until very recently many of them were considered the exclusive territory of managers/humans to decide.

Many advances are being made to have software “understand” what, in a given situation and context, is the right interpretation of a request by a customer/person/company, and to act – make decisions – accordingly. Understanding natural language, for example, is helped greatly by a computer knowing the context of what is said. Many years ago (20-25?) I met a researcher whose life ambition it was to be able to use natural language as input for computers. He had come to the conclusion that it was vital to know the context. He was pretty much stuck, as the computing power and other resources needed for this kind of effort were beyond even the wildest imagination of that day and age. We have come a long way since then, but we are not there yet!

Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them. Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.

The algorithms are getting better. But they cannot do it alone.

“You need judgement, and to be able to intuitively recognize the smaller sets of data that are most important,” Mr. Taylor said. “To do that, you need some level of human involvement.”

For now it is more efficient to have humans help make these judgements in some situations. In many other situations human judgement has already been bested by computer decision making, or the computer is the primary decision maker while a human supervises with the ability to override (like a pilot in a modern plane). But with the advances in computing power and communication technology, and the build-up of knowledge graphs at an exponential rate, the need for humans to help understand context will diminish.

Humans and computers differ in many ways. But computers are fast getting better in various domains that used to be the exclusive territory of humans, where our brains and self-perceptions pose all kinds of limitations. The abilities of our brains for “decision making” are mind-boggling (pun intended), but in many circumstances a computer program can do it even better. The number of such circumstances will only increase.

Update 3 March 2014:
via A computer made a math proof the size of Wikipedia, and humans can't check it | The Verge.

It raises the question of non-human mathematics: if a proof can only be checked with a computer, can it be accepted as true by humans? According to Gil Kalai of the Institute of Mathematics at the Hebrew University of Jerusalem in Israel, a human might not have to check it. He claims that if another computer can generate the same proof with the same results, it’s likely to be accurate.

via This machine kills trolls | The Verge.

How does automation affect the social interactions among Wikipedians? That’s the question Aaron Halfaker, the Wikimedia Foundation researcher, has been asking. Looking at anti-vandalism software such as Huggle and Cluebot, he says, "I see this amazing thing: it makes Wikipedia tractable." The long-conventional view of the site as a free-for-all palimpsest of anonymous scribblings — "anyone can edit" — becomes something much different. The tools that saved Wikipedia also altered it by adding a layer of gatekeepers.

Halfaker has examined how such gatekeepers affect new contributors. "When you show up at the edge of a community and you’re there to help, you expect your interactions will be with someone who at least has time to say hello," he says. "These tools aren’t really designed to do that. They’re designed to be efficient. They’re designed to do a job." They’re saving Wikipedia from vandalism, but doing nothing to welcome new users.