True Morality

Is there such a thing as an objective basis of morality? For some time, in secular circles, the idea has seemed absurd. Morality is what we choose it to be. We are free to do what we like so long as we don’t harm others.
Moral judgments are not truths but choices. There is no way of getting from “is” to “ought”, from description to prescription, from facts to values, from science to ethics. This was the received wisdom in philosophy for a century after Nietzsche had argued for the abandonment of morality – which he saw as the product of Judaism – in favour of the “will to power”.
Recently, however, an entirely new scientific basis has been given to morality from two surprising directions: neo-Darwinism and the branch of mathematics known as Game Theory. As we will see, the discovery is intimately related to the story of Noach and the covenant made between God and humanity after the Flood.
Game theory was invented by one of the most brilliant minds of the 20th century, John von Neumann (1903-1957). He realised that the mathematical models used in economics were unrealistic and did not mirror the way decisions are made in the real world. Rational choice is not simply a matter of weighing alternatives and deciding between them, because the outcome of our decision often depends on how other people react to it, and usually we cannot know this in advance. Game theory – which von Neumann set out fully in 1944, together with the economist Oskar Morgenstern, in Theory of Games and Economic Behavior – was an attempt to produce a mathematical representation of choice under conditions of uncertainty. Six years later it yielded its most famous paradox, known as the Prisoner’s Dilemma.
Imagine two people, arrested by the police under suspicion of committing a crime. There is insufficient evidence to convict them on a serious charge; there is only enough to convict them of a lesser offence. The police decide to encourage each to inform against the other. They separate them and make each the following proposal: if you testify against the other suspect, you will go free, and he will be imprisoned for ten years. If he testifies against you, and you stay silent, you will be sentenced to ten years in prison, and he will go free. If you both testify against one another, you will each receive a five-year sentence. If both of you stay silent, you will each be convicted of the lesser charge and face a one-year sentence.
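To make the structure of the choice easier to see, here is a minimal sketch of the payoff table in Python (the names and layout are my own, added purely for illustration); each entry records the years in prison each suspect receives, so lower is better.

```python
# Payoff table for the Prisoner's Dilemma described above.
# Each entry maps (my choice, the other's choice) to (my sentence, his sentence),
# measured in years of prison, so a lower number is a better outcome.
SENTENCES = {
    ("silent",  "silent"):  (1, 1),    # both stay silent: lesser charge, one year each
    ("silent",  "testify"): (10, 0),   # I stay silent, he informs: ten years for me
    ("testify", "silent"):  (0, 10),   # I inform, he stays silent: I go free
    ("testify", "testify"): (5, 5),    # we inform on each other: five years each
}

def my_sentence(my_choice: str, other_choice: str) -> int:
    """Years I serve, given what each of us chose."""
    return SENTENCES[(my_choice, other_choice)][0]
```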
It doesn’t take long to work out that the optimal strategy for each is to inform against the other. The result is that each will be imprisoned for five years. The paradox is that the best outcome would be for both to remain silent: they would then each face only one year in prison. The reason that neither will opt for this strategy is that it depends on collaboration. However, since each is unable to know what the other is doing – there is no communication between them – they cannot take the risk of staying silent. The Prisoner’s Dilemma is remarkable because it shows that two people, both acting rationally, will produce a result that is bad for both of them.

Eventually, a solution was discovered. The reason for the paradox is that the two prisoners find themselves in this situation only once. If they faced it repeatedly, they would eventually discover that the best thing to do is to trust one another and co-operate.
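A short, self-contained sketch (again in Python, with illustrative names of my own) makes both halves of the paradox explicit: in a one-off encounter, informing is the better reply to whatever the other prisoner does, yet over repeated encounters two silent prisoners fare far better than two informers.

```python
# Sentences (in years) keyed by (my choice, the other's choice); lower is better.
SENTENCES = {
    ("silent", "silent"): 1, ("silent", "testify"): 10,
    ("testify", "silent"): 0, ("testify", "testify"): 5,
}

# One-off game: whatever the other prisoner does, testifying leaves me better off.
for other in ("silent", "testify"):
    assert SENTENCES[("testify", other)] < SENTENCES[("silent", other)]

# Repeated game: ten encounters in a row.
rounds = 10
both_silent = rounds * SENTENCES[("silent", "silent")]      # 10 years each in total
both_testify = rounds * SENTENCES[("testify", "testify")]   # 50 years each in total
print(both_silent, both_testify)  # co-operation is far better for both in the long run
```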
In the meantime, biologists were wrestling with a phenomenon that puzzled Darwin. The theory of natural selection – popularly known as the survival of the fittest – suggests that the most ruthless individuals in any population will survive and hand their genes on to the next generation. Yet almost every society ever observed values individuals who are altruistic: who sacrifice their own advantage to help others. There seems to be a direct contradiction between these two facts.
The Prisoner’s Dilemma suggested an answer. Individual self-interest often produces bad results. Any group which learns to co-operate, rather than compete, will be at an advantage relative to others. But, as the Prisoner’s Dilemma showed, this needs repeated encounters – the so-called Iterated (that is, repeated) Prisoner’s Dilemma. In the late 1970s, the political scientist Robert Axelrod announced a competition to find the computer program that did best at playing the Iterated Prisoner’s Dilemma against itself and other opponents.
The winning program was devised by a Canadian, Anatol Rapoport, and was called Tit-for-Tat. It was dazzlingly simple: it began by co-operating, and then repeated the last move of its opponent. It worked on the rule of “What you did to me, I will do to you”, or “measure for measure”. This was the first time scientific proof had been given for any moral principle.
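Tit-for-Tat is simple enough to state in a few lines of code. The sketch below is my own illustration of the rule as described above, not Rapoport’s original entry: co-operate on the first move, then copy whatever the opponent did last.

```python
def tit_for_tat(opponent_history: list[str]) -> str:
    """Co-operate on the first move, then repeat the opponent's last move."""
    if not opponent_history:          # first encounter: start by co-operating
        return "cooperate"
    return opponent_history[-1]       # thereafter: measure for measure

# Example: against an opponent who defects once and then co-operates,
# Tit-for-Tat retaliates exactly once and then returns to co-operation.
opponent_moves = ["cooperate", "defect", "cooperate", "cooperate"]
my_moves = [tit_for_tat(opponent_moves[:i]) for i in range(len(opponent_moves))]
print(my_moves)  # ['cooperate', 'cooperate', 'defect', 'cooperate']
```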
What is fascinating about this chain of discoveries is that it precisely mirrors the central principle of the covenant God made with Noah:
Whoever sheds the blood of man,
by man shall his blood be shed;
for in the image of God
has God made man.
This is measure for measure [in Hebrew, middah keneged middah], or retributive justice: as you do, so shall it be done to you. In fact, at this point the Torah does something very subtle. The six words in which the principle is stated are a mirror image of one another: [1] Who sheds [2] the blood [3] of man, [3a] by man [2a] shall his blood [1a] be shed. This is a perfect example of style reflecting substance: what is done to us is a mirror image of what we do. The extraordinary fact is that the first moral principle set out in the Torah is also the first moral principle ever to be scientifically demonstrated. Tit-for-Tat is the computer equivalent of (retributive) justice:
Whoever sheds the blood of man, by man shall his blood be shed.
The story has a sequel. In 1989, the Austrian-born mathematical biologist Martin Nowak produced a program that outperforms Tit-for-Tat. He called it Generous Tit-for-Tat. It overcame one weakness of Tit-for-Tat: when you meet a particularly nasty opponent, you can get drawn into a potentially endless and destructive cycle of retaliation, which is bad for both sides. Generous Tit-for-Tat avoided this by occasionally, and at random, forgetting the last move of its opponent, thus allowing the relationship to begin again. What Nowak had produced, in fact, was a computer simulation of forgiveness.
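The idea can be sketched in the same style (a simplified illustration of my own, not Nowak’s published model): play like Tit-for-Tat, but with some small probability forgive a defection and co-operate anyway, so that a cycle of retaliation can be broken.

```python
import random

def generous_tit_for_tat(opponent_history: list[str], forgiveness: float = 0.1) -> str:
    """Play Tit-for-Tat, but occasionally forgive a defection at random."""
    if not opponent_history:
        return "cooperate"                      # start, as before, by co-operating
    if opponent_history[-1] == "defect" and random.random() < forgiveness:
        return "cooperate"                      # forgive: wipe the slate clean
    return opponent_history[-1]                 # otherwise, measure for measure
```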
Once again, the connection with the story of Noach and the Flood is direct. After the Flood, God vowed: “I will never again curse the ground for man’s sake, although the imagination of man’s heart is evil from his youth; nor will I again destroy every living thing as I have done.” This is the principle of Divine forgiveness.
Thus the two great principles of the Noachide covenant are also the first two principles to have been established by computer simulation. There is an objective basis for morality after all. It rests on two key ideas: justice and forgiveness, or what the Sages called middat ha-din and middat rachamim. Without these, no group can survive in the long run.
In one of the first great works of Jewish philosophy – Sefer Emunot ve-Deot (The Book of Beliefs and Opinions) – R. Saadia Gaon (882-942) explained that the truths of the Torah could be established by reason. Why then was revelation necessary? Because it takes humanity time to arrive at truth, and there are many slips and pitfalls along the way.
It took more than a thousand years after R. Saadia Gaon for humanity to demonstrate the fundamental moral truths that lie at the basis of God’s covenant with humankind: that co-operation is as necessary as competition, that co-operation depends on trust, that trust requires justice, and that justice itself is incomplete without forgiveness. Morality is not simply what we choose it to be. It is part of the basic fabric of the universe, revealed to us by the universe’s Creator, long ago.