Category Archives: ethics

A Mathematical Proof that Immortality is Moral

(Disclaimer: this is meant as a joke, not as a serious proof.)


1. There is at least one number N such that living for N years is moral.

2. If living for M years is moral, then so is living for M + 1 years.

From here we reason by induction:

3. Living for N years is moral  (because of (1)).

4. Living for N + 1 years is moral (because of (2)).

5. Living for (N + 1) + 1 years is moral (because of (2)).

6. Living for ((N + 1) + 1) + 1 years is moral (because of (2)).

7. Living for (…((N + 1) + 1) + … + 1) years is moral (because of (2) applied repeatedly).

Rearranging the terms from (7) we get:

8. Living for N + M years is moral, where N is the number from (1) and M is any positive integer, no matter how large.

Therefore, it is moral to live for K years, where K is any integer which can be arbitrarily large. In other words, it is moral to have an arbitrarily large lifespan: for any number of years, it can be shown there is always a larger number of years for which living is moral.
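For the mathematically inclined, the whole argument can be caricatured in a few lines of Python (a toy sketch; the value N = 80 is an arbitrary placeholder for the number whose existence premise (1) asserts):

```python
def is_moral(k, N=80):
    """Toy model of the proof: premise (1) supplies the base case,
    premise (2) supplies the inductive step from M to M + 1."""
    if k < N:
        raise ValueError("the premises only cover lifespans of at least N years")
    m = N             # step (3): living N years is moral, by premise (1)
    while m < k:      # steps (4)-(7): apply premise (2) over and over
        m += 1        # if m years is moral, so is m + 1
    return True       # step (8): living k = N + (k - N) years is moral

print(is_moral(1_000_000))  # True: an arbitrarily large lifespan comes out moral
```

The loop terminates for any finite k, which is exactly the point of the joke: induction reaches every integer, but never infinity itself.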



Mammoths and Chickenosaurus

I just read this very interesting article by Leonard Finkelman in Massimo Pigliucci’s blog “Rationally Speaking.” It is about de-extinction: bringing species back to life. If you’ve followed the science news in the last two months you probably know what I’m talking about. Finkelman argues that:

1. De-extinction is not possible.

2. De-extinction should not be attempted.

I agree with both, but since the thought of dinosaurs and mammoths makes me giddy as a schoolboy I’ll cover my ears, sing “tralalalalala” and pray that nobody listens to his admonishments. I will now reproduce a very sketchy version of his arguments and, most importantly, a summary of how different attempts at de-extinction work.

Gone the way of the Dodo. (Photo credit: Wikipedia)

So, some background on de-extinction. First, mammoths: not long ago, some preserved mammoth blood was found in the Russian tundra. This, apart from being awesome, would give scientists enough DNA to attempt to clone one of these furry elephants. All they need to do is replace the nucleus of an elephant egg with that of a mammoth cell and work their magic. Of course, the “magic” is complex and involves several technical difficulties: for example, it is not enough to have a viable nucleus. We need a cell with the right chemical components to allow this nucleus to work. If you insert a mammoth cell nucleus into a chicken cell, you won’t get much. In this sense, real de-extinction is not possible, meaning that the cloned animal won’t be exactly a mammoth.

There is another, more technical sense in which Finkelman argues that de-extinction is not possible, but I don’t think it’s a sense any of us would care about, since it’s mainly semantics. At the end of the day, am I seeing a mammoth (or something indistinguishable from one)? If the answer is yes, it’s good enough for me.

Now, chickenosaurus. I have to say this one really gets my mojo going. The idea is this: most evolutionary changes happen by activating or deactivating certain genes, which actually stay there. It is possible to reverse these changes by re-activating or de-activating those genes. So, for example, I can take a chicken and make it grow a tail and a pair of claws through a surprisingly simple process. I can keep doing that with other parts of the animal until I get something that looks just like a non-avian dinosaur. People are now attempting this and it is absolutely awesome. I don’t care if it’s a “real” non-avian dinosaur or not.

I want to see those chickens.

Artist’s depiction of a Dilophosaurus wetherelli (Photo credit: Wikipedia)

So the question now is: should we do this? Should we spend millions of dollars on this kind of research? Finkelman argues we shouldn’t. For starters, bringing a species (that, he argues, is not really a species but just an individual that resembles a species that’s gone) back to life is costly, much more so than not driving it to extinction in the first place. Second, the possibility of de-extinction may make people less careful about driving species extinct. After all, we can just bring them back whenever we want! Third, the cost of bringing a minimum viable population back to life would be prohibitive, so, in practice, we would just have a couple of beasts to show in zoos, not a brought-back-to-life species. Finally, shouldn’t all those millions of dollars be invested in something that’s actually useful?

I think he has a point. More than one, really. Very good ones, all of them. He is right. We shouldn’t do this. It’s pointless and expensive.

But, mammoths!


I mean… come on, man.


The Most Important Question

“The philosophers have only interpreted the world, in various ways; the point is to change it.” Karl Marx

“What to do?”

It could be argued that this is the only question that matters. We are left in this world without any guidance and told to do our best. But we don’t know what our best is, because we don’t know what is good or what is bad. And so we inquire, thinking that, with the next piece of information, things will finally fall into place, only to find that, no matter how long we search, we are just as far from answering the question: “what to do?”

But we have to do something. Time does not care about our doubt and we are forced to act, day after day, making the best of the information we have and hoping we won’t wake up one day, eighty years from now, and discover we went at it the wrong way.

This question, what to do, is the driving force behind most people’s lives. Some cling to an answer and embrace it, despite all evidence against it. This is understandable. Nobody wants to find out too late they were wrong. Some people decide to keep looking and make do with what they have. Some people stick to a philosophy, some make their own; some change every three months. At the end, even if we don’t know what to do, we all do something.

Karl Marx

Wanting to know what to do is what drove me to study physics in the first place. “If I am to know what to do,” I thought, “I first need to know which kind of world I live in.”  When I know the what, I can start thinking about the why. I even suspected the why would be somehow included in the what, if I went deep enough. That was the plan, anyway.

Of course, I never found out the what or the why and even the truths I was most sure of crumbled under my feet. But that’s a different story for a different day.

The question “what to do” seems ill-posed. What to do in order to achieve what? “What to do” implies a sense of cosmic order, of good and evil. If we reject the existence of those, the question has to be refined. “What to do if I want to be happy?” “What to do if I want to be rich?” “What to do if I want to rule the world?” Those are answerable questions that possibly don’t require all the knowledge in the universe. Of course, a new question arises: “what to want?” Should I strive for world domination? For happiness? For a family and a job? Should I strive to change the political system in my country? But, again, “what to want” implies some wants are better than others. Better than others for what?

And here’s another possibility: the question is moot. Our choices are either determined or random: either way, we don’t have much choice. We will do what we will do, no matter what. Fretting over what to do implies we have some say on what it is we will actually do. But we don’t. If you’re worried that your actions will somehow go against the cosmic order, making the universe a worse place, don’t worry: by definition, they can’t.

And, either way, the universe doesn’t care.

This post is dedicated to my father, to whom I owe the first quote.


Compassion, Intelligence and Evolution

Today’s article will be highly speculative. Please don’t take it more seriously than it deserves.

I want to speak about compassion. By compassion I mean the ability to feel some other being’s pain. I say being, and not human being, because I want to venture a hypothesis that correlates compassion and intelligence. To do that, I have to look at compassion in animals.

There are different degrees of compassion. Most human beings feel compassion towards their children. A smaller subset feels compassion towards their parents. In decreasing order of frequency, human beings feel compassion towards their family, friends, immediate social group, extended social group, nation, continent and humanity as a whole.

Compassion is a fairly recent invention. For example, bacteria don’t feel compassion. They don’t feel much, in fact. Worms, fish and cephalopods don’t seem to have much compassion either, not even towards their children. Reptiles in general don’t take care of their young: they lay their eggs and leave their offspring to fend for themselves. One may say they couldn’t care less.

Compassion personified: a statue at the Epcot center in Florida (Photo credit: Wikipedia)

Only mammals and birds seem to feel some sort of compassion, though it is mostly confined to the family unit. Mammals and birds also have the biggest brain sizes in the animal kingdom. It is probably not a coincidence: feeling compassion requires the capacity to make simulations of another living thing. But let me elaborate, because I believe the simulation point to be important.

Most living beings are capable of making some type of simulation of their environment. That’s how we make decisions: we simulate possible outcomes based on our different courses of action and we choose the one that leads to the most pleasure and the least pain. At least, that’s the basic framework. Bacteria don’t have to simulate much: when their food detectors fire, they move towards the food. That’s pretty much it. But, as the complexity in situations increases, so does the need for more accurate simulations.

Any software engineer will tell you that simulating something inorganic is millions of times easier than simulating something organic. A rock’s trajectory is easy to calculate; a sparrow’s, not so much. The capability for simulating other living things, then, requires significant processing power. Since this capability is needed for compassion, it is not surprising that only animals with highly developed brains have developed it. In fact, one may even see compassion as a by-product: as animals learned to simulate others (in order to eat them, for example) they also learned to simulate their peers, which led to some kind of understanding that these peers also feel pain. Mirror neurons, to which this post is an excellent introduction, may also have evolved in this context.

Monkey surprise (Photo credit: @Doug88888)

Monkeys are capable of compassion. Unlike other mammals, theirs extends a little further than their family and into their social group. If a chimpanzee is beaten up in a fight, it is common to see another one trying to comfort it by putting its arm around it, something which may look spookily familiar. However, chimpanzees are only capable of compassion within their social group. They couldn’t care less about what happens to individuals outside it.

This is the way it works in humans, most of the time. Every time there’s a plane accident, the first thing we ask is “were there any people from my country?” We don’t care what happened to all of those foreigners. We want to know that our people are safe. The same thing happened recently with the Boston bombings: even though much more horrid acts take place daily in Iraq or Syria, we shrug them off without much thought, while being struck with grief by the ones that hit close to home.

However, that’s only part of the story. Some humans do feel empathy towards people who are not in their social group. According to primatologist Frans de Waal, this kind of compassion is “a fragile experiment” being conducted by our species. That is, we are the first species to feel universal empathy. And I think this is significant, because it signals a trend from less compassion to more: from not caring about any other individual, to caring about your children, to caring about your family, to your social group, to every single member of your species.

Can this trend continue? As we get smarter, be it with technology or evolution, will we become even more compassionate? Is caring for the welfare of animals the next step, which is already taking place? As we get smarter, will we be able to simulate other living beings better? Will that increase our compassion? Where does this lead?

People usually see evolution (rightly) as this really cruel, blind process where the strong step on the weak. However, I find it encouraging that, even so, it seems to have led to the emergence of increasingly compassionate species. This outcome was far from obvious, given the way natural selection works. I like the idea of evolution being a blind, cruel, horrid process that somehow gives birth to a species that stops being blind and cruel. Evolution as a process that can put a stop to itself and become something better, gentler, more nurturing, more creative.

Who knows, maybe there’s still hope for us all.


The Case for an Innate Morality

We have all felt it: the urge to act in a way that, objectively, makes no sense but that somehow feels right. After we do it, we find rational justifications, because that’s what humans do: we construct narratives that explain our behavior, even if those narratives have absolutely nothing to do with what actually happened, as split-brain patient experiments have shown again and again.

It almost seems like magic. Where does that sense of morality come from? Is it written in the heavens? In the fabric of the universe? Is it the result of our subconscious thought? Is it the product of empathy? It turns out science has a lot to say about why we make choices. That’s what I want to cover in this article.

You may be familiar with something called the “trolley problems.” If you are, feel free to skip the next two paragraphs. The trolley problems are a series of dilemmas where a person needs to decide a course of action which will have some negative consequences, regardless of their decision.

The first trolley problem.

The first trolley problem is called the switch dilemma: in it, a trolley is racing towards five children who are playing on the track, unaware of what’s coming. You can save them by pressing a switch that will divert the trolley to a different set of tracks, where there is only one person. So you have a moral choice before you: will you kill one to save five?

The second trolley problem.

The next scenario is known as the footbridge dilemma: in it, the same trolley is racing towards the same five children, but you’re standing on a footbridge over the rails. Right next to you there is a fat man: if you throw him off the bridge and on the track, the trolley will stop and the five people will be saved. Will you throw him off the bridge to save the children?

Research shows that 90% of people will choose to press the switch in the first case, whereas only 10% will throw the person off the footbridge in the second, even though the outcome is exactly the same (one dies, five are saved). Furthermore, the 10% who choose the utilitarian approach in the second case usually have psychopathic tendencies.

Researchers have been studying this problem for some time by using fMRI machines. They have found that, in the first case, the area associated with rational, conscious thought is activated. However, in the second case, the emotional areas light up, as well as a region associated with conflict resolution. And they’ve gone further than that: they have actually used transcranial magnetic stimulation to inhibit the emotional parts of the brain, getting the participants to choose the utilitarian approach in both cases. That is, we can actually program people to be utilitarian by inhibiting certain areas of the brain.


High resolution fMRI of the human brain. (Photo credit: Wikipedia)

This shows that our morality is something far more complex (or primitive) than moral philosophy would have us think. No matter how good a moral theory is, if it conflicts with the emotional centers of the brain it will be overridden, except through a great amount of willpower. Some people argue that our instinctive morality was not made for this day and age and that, because it is wired as an adaptation to a world that is no more, it is not a good guide for making decisions: that we should be extremely suspicious of what feels right and do instead what makes sense, whether it feels right or not. They have a point: evolution works by making hacks. It doesn’t devise foolproof solutions to a problem, but temporary fixes that produce reasonable results, quickly. In this sense, it is not surprising that what our instincts tell us is sometimes far from what would bring the most benefit to our fellow humans and ourselves.

There is much more to be said about morality and science: I am thinking of morality in animals and of the fact that our sense of fairness will often lead us to make irrational decisions that benefit nobody. But I think the point above is enough food for thought: more on morality coming soon.


An Objective, Universal Morality

This post is part of a series called the Anti-Week. If you don’t know what it’s about, please read this before you continue reading!

This one’s an easy one. I doubt it will take long to argue. Here’s how it goes.

First, if you haven’t read it yet, read my article “Why We Are All the Same Person.”

Done? Great. I hope you bought my conclusion, otherwise you won’t agree with what follows.

Let’s define subjective morality in this completely hedonistic way: good is whatever is good for the subject (that is, you.) Bad is whatever is bad for the subject (again, you.) Here “good” or “bad” have the usual meanings: that which is conducive to pain or unhappiness is bad; that which is conducive to pleasure, happiness and peace is good. I won’t elaborate further, though I’m aware we could spend several afternoons discussing the particular details.

DOGMA AND MORALITY (Photo credit: andreco)

Now, according to the article above, you and I are the same person. In fact, we are all the same person. “You” includes, well, everyone.

Therefore, the subjective morality above (maximize your good, minimize your bad) is instantly applied to the collectivity of humans, so that “your” good becomes “everybody’s.” That is, you should act in a way that maximizes global happiness.

Therefore subjective morality (of an individual) becomes automatically universal (of the whole of humankind) and objective, since it does not depend on the subject.

Furthermore, this morality can be universalized even more by extending it to all sentient beings, including as-yet-undiscovered aliens.

There, done.

I thought this would be harder.


Mind Enhancement, Sooner than You Think

I just read this amazing article at H+ magazine. It explains that there is a new family of cognition-enhancing drugs (nootropics) with no side-effects. You heard it right: more intelligence, better memory, improved concentration, no side-effects.

At the moment, these are prescription drugs, so you can’t just go and buy them at your local pharmacy. However, some companies keep popping up that sell these on Amazon or eBay. As they get closed down by the FDA, others appear. So plenty of people are already experimenting with them, boosting their cognition daily. You can find a first-person account of the experience here.

Two questions arise. First: is this a good idea? Second: should these drugs be readily available to anyone who wants to use them?

Nick Bostrom, a Swedish Oxford-educated philosopher, at a 2006 summit in Stanford. (Photo credit: Wikipedia)

According to Nick Bostrom, the answer is “yes” in both cases. In 2008 he suggested the following scenario, which has already become a reality: imagine you had a nootropic drug that could boost cognitive performance by 1%. Now, if every scientist in the world started taking it, assuming there are around 10 million of them, the effect would be equivalent to having 100,000 more scientists worldwide. That represents a bigger contribution to progress than an Einstein or a Newton. So, according to Bostrom, taking these drugs would not only be morally acceptable: it would be the right thing to do.
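Bostrom’s back-of-the-envelope figure is easy to verify (the 10 million scientists and the 1% boost are the assumptions of his scenario, not established facts):

```python
scientists = 10_000_000   # assumed number of scientists worldwide
boost = 0.01              # assumed 1% gain in cognitive performance

extra_equivalents = scientists * boost
print(int(extra_equivalents))  # 100000: like adding 100,000 scientists
```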

But there’s more. Actually, you don’t have to take any drugs to boost your cognitive performance. The alternative is “smart-hats.” What these do is stimulate some areas of the brain by introducing a low-level electric current. While that may sound like science fiction (at best) or a scam (at worst), it turns out they are already being used by DARPA, which reports a 250% improvement in rates of learning in individuals who use them.

I can think of many uses for this. For example, I am learning Chinese and German, both at an excruciatingly slow rate. Some days I feel too sleepy; some days I’m just tired. Some days I cannot concentrate. If I could speed up the process by 250%, I would save a substantial amount of time and money. Other uses would be, for example, writing this blog: I could make my entries more interesting, witty and engaging. Or I could take less time writing them, which would mean spending more time reading other people’s blogs or playing computer games or whatever ticked my fancy.
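To put that figure in perspective, here is a rough sketch. It reads “250% improvement” as a learning rate 3.5 times the baseline (100% + 250%), which is one plausible interpretation; the 1,000-hour figure is a made-up placeholder:

```python
baseline_hours = 1000   # hypothetical hours to learn a language unaided
improvement = 2.50      # "250% improvement" read as +250% to the learning rate

rate_multiplier = 1 + improvement          # 3.5x the baseline learning rate
enhanced_hours = baseline_hours / rate_multiplier
print(round(enhanced_hours))  # 286: under a third of the original time
```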

Nootropil II (Photo credit: Arenamontanus)

So where do you get one? Well, until now these sets cost around $600 each. But this company, Go Flow, is creating a do-it-yourself prototype that should be substantially cheaper and which you can pre-order at their website.

All of this raises some ethical issues, as always. For example: where do we set the limits? Maybe it is OK for a scientist to use these drugs in order to produce better research. Is it OK for a student taking an exam? What if all students start taking these drugs and universities decide to make their content harder in turn? The pressure to use nootropics would then be enormous. What if some parents make their children take them so they perform better at school? What if a company makes it a requisite for us to take these pills in order to hire us? What if they force us to wear our smart-hats at all times while at work?

These are things that will have to be sorted out, sooner rather than later. Nootropics will be the next big thing. Bigger than Viagra. They are coming soon – one year is my prediction – and they are here to stay.

Of course, I have no faith in politicians noticing the issue and legislating about it before problems start. As always, we will have to wait and see until someone goes too far. After all, it’s what we always do.

Disclaimer: This article was not written under the effects of any nootropic drugs or cognition-boosting helmets.


David Yerle Writes about Ethics


Codes of ethics can be written down and normally are

David Yerle writes about ethics. He knows he is about to enter a murky subject and hopes he won’t cross any lines, but is pretty sure he will. After all, ethics is one of the areas where people, no matter how shaky their intellectual formation, seem to be the surest of their beliefs. So he does not hope to convince anyone. But he does hope to create a tiny crack of doubt in those who don’t believe as he does and maybe a smirk in those who do.

David Yerle does not believe in morality but that does not make him a bad person, or so he hopes. On the contrary, it makes him a good person without the right to feel good, which is something few good people are able to bear. However, David Yerle bears it with some dignity and a lot of whining.

David Yerle used to be a Kantian when he was a kid, became a Nietzschean as a teenager and was mesmerized by Derek Parfit as an adult. He makes a mental note to write blog posts on what matters and on individuality.

The view he will defend from here on will be more akin to his Nietzschean teenage years than to his Parfitian adulthood. He will explain his change of opinion in later posts, so people shouldn’t be too concerned about what he is about to write.

David Yerle believes the universe is ruled by laws. Whether we know these laws or not is irrelevant: what is important is that these laws exist. David Yerle thinks the LHC is pretty convincing proof that the universe is ruled by laws. He believes so because the precision required for one simple prediction excludes random behavior of the universe by several orders of magnitude.

If the universe is ruled by laws, there are only facts. A stone falls. Fact. A river flows. Fact. A stone falls on someone’s head. Fact. Bad fact, for the person involved. But not bad from a cosmic point of view. Since the universe doesn’t care. The universe is an indifferent bastard. Or bitch, depending on your gender preferences.

Another way of looking at the same thing is by considering why people should do good deeds. In fact, one first needs to consider what good deeds are. If one believes in the Bible the answer is easy: good is whatever the Bible says is good. Problem solved. The same goes for most religions, except for Buddhism or Taoism, which actually don’t have very clear-cut morals. If one does not believe in the Bible one needs to try to find an answer. The reason one needs to find an answer is that it wouldn’t be nice to find out one has been doing the wrong thing all along.

The problem is, with no deity to show us the way, there doesn’t seem to be any possible path. “Make other people happy” or “don’t screw them up,” as a more conservative approach, seem to work but why? Why shouldn’t one screw other people up? Without a God to tell you it’s bad, why should you care?

David Yerle believes you shouldn’t care but, unless you’re a psychopath, you do. And that is probably a good thing. You do because it’s ingrained in your being. The moral sense is a part of being human and people, unless they work for a bank, cannot turn it off. So maybe it would make sense to stop asking why on Earth you should do what’s good and just focus on what your gut tells you. Not because it’s good, mind you, but because it will make you feel good, in the same way that eating while you’re hungry does.

David Yerle is not totally sure.

David Yerle wonders.