Category Archives: mathematics

The Fuzz about Fuzzy Logic

You have a heap of sand and take a grain of sand from it. Is it still a heap? If you answered “yes,” then, by induction, you will be forced to admit that one grain of sand is also a heap, and so is no sand at all, since I can ask you the same question a million times over and you will be forced to give the same answer each time. This is the famous Sorites paradox, and it is one of many reasons for abandoning classical logic.

Classical logic can be thought of as common-sense logic. For each statement, there are two possibilities: it is either true or false. There is no such thing as a half-true statement. This seemingly obvious requirement actually causes problems, such as the paradox above. It is precisely this which allowed me to mathematically prove immortality is moral a week or so ago: by not allowing the sentence “living for X years” to be more or less true, the reader got stuck with a black or white choice.

Fuzzy logic, introduced in the sixties by Lotfi Zadeh, aims to change all of that. The idea is extremely simple but very powerful: expand the concept of truth. Let a sentence be 70% true or 30% false, depending on your aesthetic preference.


A clothes dryer using “fuzzy logic”. This is a Candy. (Photo credit: Wikipedia)

Unlike what it may seem, this has nothing to do with probabilities. That a sentence is 70% true does not mean that it has a 70% probability of being true: that would be falling back into old, classical logic. What it means is precisely what it says: the sentence is partly true and partly false. The perfect example is a glass which is 70% full. Is it full? Well, kind of. Is the sentence true? Well, kind of.

Fuzzy logic solves the Sorites paradox by allowing the truth-value of a sentence to decrease gradually. If I remove a grain from a heap of sand, is it still a heap? Yes, but less so than before. For example, we could say that the truth value of “this is a heap” decreases by 1 divided by the original total number of grains with each grain removed.
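The gradual decay above is easy to play with in code. Here is a minimal sketch in Python (the linear rule, like everything here, is just one illustrative choice):

```python
def heap_truth(grains_left, total_grains):
    """Truth value of 'this is a heap', dropping linearly as grains are removed.

    The linear rule is one illustrative choice; any monotone decreasing
    function would preserve the hierarchy of truths just as well.
    """
    if total_grains == 0:
        return 0.0
    return max(0.0, grains_left / total_grains)

total = 1_000_000
print(heap_truth(total, total))       # 1.0: definitely a heap
print(heap_truth(total // 2, total))  # 0.5: half true
print(heap_truth(1, total))           # nearly 0: barely a heap at all
```

No single grain flips the answer from “heap” to “not a heap”; the truth value just slides.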

There seem to be several issues with this. For example: how do you assign the truth-values? Isn’t it arbitrary to say that a sentence is 53.4% false? How do you know it’s not 53.5%?

Here there are several points to be made. Firstly, the beauty of fuzzy logic is that the particular numbers do not matter, but only the relationship between them. That is: any transformation that leaves the hierarchy of truths untouched will not affect the outcome of our operations. Secondly, the sentence “this sentence is 53.4% true” is also fuzzy: there is no point in asking whether it is true or not, but only in asking how true it is. In this sense, one could think of a meta-fuzzy logic on fuzzy predicates themselves.

Fuzzy logic is an offshoot of another branch of mathematics called “fuzzy set theory.” In this case, the idea is even simpler: one element can belong to a set to a certain extent. Using the example from before, we could say that a glass which is 70% full belongs to the set of full glasses to a 70% degree. This will be important later, when defining fuzzy integrals.
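In code, a fuzzy set is nothing but a membership function. The sketch below uses Zadeh’s usual min/max versions of “and” and “or”; the particular membership function for full glasses is, of course, an illustrative choice:

```python
def full_glasses_membership(fraction_full):
    """Degree to which a glass belongs to the fuzzy set of 'full glasses'.

    Here membership is simply the fraction filled, clamped to [0, 1] --
    an illustrative choice, not the only possible one.
    """
    return min(1.0, max(0.0, fraction_full))

# Zadeh's classic connectives: min generalizes intersection ("and"),
# max generalizes union ("or").
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

glass = 0.7
print(full_glasses_membership(glass))  # 0.7: the 70%-full glass is 70% a member
print(fuzzy_and(glass, 0.4))           # 0.4
print(fuzzy_or(glass, 0.4))            # 0.7
```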


English: 3D Graphical Representation of an operator usable in fuzzy logic (a set of several fuzzy operators is provided) (Photo credit: Wikipedia)

Fuzzy logic is not speculative mathematics and it is not open to debate, in the same way that Riemannian geometry is not open to debate. It is routinely used in electronics and artificial intelligence and powers almost every washing machine on the planet, as well as, most likely, the brakes of your car. So it is not just an idea put forward by some detached mathematician: it is used every day to run stuff in your everyday life.

Fuzzy logic can also be used in philosophy for a number of things. A lot of philosophical problems can be dealt with by realizing we’re using classical logic instead of fuzzy statements. Take, for example, individuality. I think I am an individual; however, my two brain hemispheres are not. If you and I were connected with the same bandwidth as my two hemispheres (sharing thoughts, memories, perceptions and the like) we would most likely feel as if we were one individual. One could use this to justify there is no such thing as an individual, since we cannot draw the line between individual and non-individual.

However, this can be easily overcome using fuzzy logic. One can allow for the sentence “X is an individual” to be fuzzy, thus having truth-values between zero (totally false) and one (totally true.) In fact, it is possible to define the degree of individuality as:

I = 1 – (Information exchanged externally) / (Information exchanged internally)

Which, as you can check for yourself, behaves properly for the extreme cases. Bear in mind, though, that any other definition preserving the truth hierarchy would work just as well.
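The definition is trivial to compute. A quick sketch (the numbers fed in are made up; only their ratio matters):

```python
def individuality(info_external, info_internal):
    """Degree of truth of 'X is an individual', per the formula above:
    I = 1 - external / internal.

    Mirrors the formula exactly, with no clamping; the inputs just need
    to share units (bits per second, say).
    """
    return 1.0 - info_external / info_internal

# Extreme cases behave properly:
# almost no external exchange -> almost fully an individual,
print(individuality(info_external=1, info_internal=1_000_000))  # ~1.0
# external exchange equal to internal -> not an individual at all.
print(individuality(info_external=500, info_internal=500))      # 0.0
```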


Warm fuzzy logic member function (Photo credit: Wikipedia)

On to more abstract stuff. If you are allergic to math, feel free to skip the next two paragraphs!

You may be familiar with something called an “integral” in mathematics. An integral is just a sum of some value over some region of space. A perfect example is the height of a mountain at different coordinates. An integral adds up each of these values (the height at point A plus the height at point B and so on), multiplied by the (infinitesimal) area element they are on.


An integral of the square root function: we add up each value, multiplied by the length increment. (Photo credit: Wikipedia)

You can imagine an integral as adding up all the values that belong to a certain set, defined by the area or volume where we are performing the addition. But what if we make this set fuzzy? What if we say “the points in the middle count for sure, but for the ones at the border we’re not so sure”? In this case, the number we will get at the end will be a fuzzy number, determined by how strongly each point belongs to the set. This can be particularly useful for determining areas of regions which are not well delimited, for example.
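Here is a rough numerical sketch of the idea: a membership-weighted Riemann sum, where each point’s contribution is scaled by how strongly it belongs to the region. (The actual fuzzy-integral literature uses more refined constructions, such as the Choquet and Sugeno integrals; this is only the naive version.)

```python
def fuzzy_weighted_sum(f, membership, a, b, steps=10_000):
    """Membership-weighted Riemann sum: each point's value counts only
    as strongly as it belongs to the region being integrated over."""
    dx = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * dx      # midpoint of each small slice
        total += f(x) * membership(x) * dx
    return total

# Area of a region with a blurry right border: points up to x = 1 count
# fully, then membership fades linearly to 0 at x = 2.
height = lambda x: 1.0
fade = lambda x: 1.0 if x <= 1 else max(0.0, 2.0 - x)
print(fuzzy_weighted_sum(height, fade, 0.0, 2.0))  # ~1.5
```

With a crisp (all-or-nothing) membership function, this collapses back to the ordinary integral.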

Fuzzy logic is not the only alternative to classical logic. There are other contenders, such as intuitionist logic (which is pretty similar, though) or quantum logic, each of which has its own merits, demerits and areas of application. I am particularly attached to fuzzy logic because I derived it independently, using something called “tensors,” in my early twenties, and was fascinated (and a little bummed) to discover it had already been invented. I also think it may be the answer to predicaments such as the one brought about by Gödel’s theorem, an Earth-shaking mathematical result that I will tackle at some other point.


A Mathematical Proof that Immortality is Moral

(Disclaimer: this is meant as a joke, not as serious proof.)


1. There is at least a number N such that living for N years is moral.

2. If living for M years is moral, then so is living for M + 1 years.

From here we reason by induction:

3. Living for N years is moral  (because of (1)).

4. Living for N + 1 years is moral (because of (2)).

5. Living for (N + 1) + 1 years is moral (because of (2)).

6. Living for ((N + 1) + 1) + 1 years is moral (because of (2)).

7. Living for (((…(N + 1) + 1)…) + 1) years is moral (because of (2) applied repeatedly).

Rearranging the terms from (7) we get:

8. Living for N + M years is moral, where N is the integer from (1) and M is any positive integer, no matter how large.

Therefore, it is moral to live for K years, where K is any integer which can be arbitrarily large. In other words, it is moral to have an arbitrarily large lifespan: for any number of years, it can be shown there is always a larger number of years for which living is moral.
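For the computationally inclined, the “proof” can even be run mechanically, with premise (1) as a base case and premise (2) as a step:

```python
# The induction above, run mechanically: premise (1) gives a base case,
# premise (2) gives a step, and iterating the step reaches any lifespan K.
N = 1                   # premise (1): some moral lifespan N (value is arbitrary)
step = lambda m: m + 1  # premise (2): if M years is moral, so is M + 1

years = N
for _ in range(1_000_000):
    years = step(years)  # each application is one use of premise (2)

print(years)  # 1000001: moral by a million applications of premise (2)
```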



Universes and Minds

Today I want to argue that our primary impressions (color, smell) are as real as primary physical quantities such as electric charge. That is, our minds can be seen as miniature universes and are as real as any physical quantity.

If you’ve read this blog for a while, you may be familiar with the simulation argument. Just in case, I will repeat it here. It goes like this: how can you be sure you’re not living in a perfect simulation?

Now, before I go on, I must give some clarification. This does not mean you are real and connected to a Matrix-like simulation: this means you are a simulated being inside a perfect simulation. That is, you are made of (virtual) protons and electrons that interact according to some (virtual) laws of physics. Every single detail, down to the smallest particle, is taken into account. Every experiment gives the same result. Everything is exactly the same: that’s why the simulation is perfect. You are surrounded by other (virtual) people who are also made of (virtual) atoms.


Abstract Colorful Universe Wallpaper – TTdesign (Photo credit: tomt6788)

So, can you tell whether you’re living in such a simulation? The obvious answer is you can’t. The only way would be to find some kind of glitch but, since the simulation is perfect, there can be no glitches. Therefore, your life in a perfect simulation would be identical to your life now. That is: a perfect simulation is indistinguishable from reality. But what is a simulation made of?

A simulation can be thought of as a series of abstract operations performed by a Turing machine, which is some kind of idealized computer with infinite resources. Turing machines do not depend on the substrate: I can make them with cheese, chickpeas, cars or people. Their output will be the same, as long as they follow the same abstract rules. That is, a simulation is a series of abstract operations that are background-independent. For all we care, there is no physical substrate, since any substrate will do the trick.
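To make the substrate-independence concrete, here is a minimal Turing machine interpreter. The machine itself is nothing but the transition table; the Python dict standing in for the tape is an arbitrary choice of substrate (cheese would work too, if more slowly):

```python
# A minimal Turing machine interpreter: the machine is pure abstract rules
# (the transition table); the tape representation is an irrelevant substrate.
def run_turing_machine(rules, tape, state="start", head=0, blank="_", max_steps=1000):
    """rules: {(state, symbol): (new_state, new_symbol, move)}, move in {-1, 0, +1}."""
    tape = dict(enumerate(tape))  # substrate choice: a dict of cells
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example machine: flip every bit, halting at the first blank cell.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flipper, "10110"))  # 01001
```

Swap the dict for a list, chickpeas or people: as long as the same abstract rules are followed, the output is the same.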

Since our reality is exactly equivalent to a bunch of abstract operations with no physical substrate, the next logical step is to assume that reality is nothing but a bunch of abstract operations with no physical substrate.


Turing machine 2 (Photo credit: Wikipedia)

Now to the mind. Again, the same question: how can you tell your mind is not a perfect simulation of a mind? And, again, you can’t. Therefore and by virtue of the above argument, your mind is nothing but a bunch of abstract operations without a physical substrate. But here’s the catch: the existence of the mind does not imply the existence of a parent universe.

Imagine this situation: there’s a universe where, instead of protons and electrons, we have something called “blobs” which interact in a bizarre manner. Now, these blobs are arranged in such a way that they give rise to the set of abstract operations that defines your mind. In this case, you would still have the perception of living in a universe with protons and electrons, even though your mind would be simulated in a completely different environment.

Another way to see this is by realizing that our mind’s operations are higher-level than our universe’s. That is, we don’t need all the complexity of our universe to create our mind. Our mind’s abstract processes can be compressed into a set of rules that is completely different from that of our universe. Any reality that implements this set of rules will give rise to your mind. Our minds are a set of abstract rules without a substrate, and these rules are different from those of their containing universe. They are emergent, in the sense that they can be derived from the lower-level rules but, once they are, they can be implemented on their own.

This means our minds are somehow disconnected from reality, in the sense that we cannot know in which reality they are being implemented. For all we know, our minds could be a universe in themselves: just as we assume the laws of physics are all there is and not part of a bigger reality that gives rise to them, we could assume our minds are similarly self-contained. Just like a miniature universe (and in fact probably bigger, since the complexity needed to specify a mind is probably greater than that needed to state the laws of physics), our impressions could just be the fundamental constituents of our universe. The color “blue,” then, would be as real as the electric charge. It is no wonder, then, that we cannot express the impression of “blue” through the equations of physics. “Blue” is part of a different, emergent set of rules and is a fundamental (irreducible) object of those.

Snapshot from a simulation of large-scale structure formation in a ΛCDM universe; the size of the box is (50 h⁻¹ Mpc)³, run using GADGET (GPL software). (Photo credit: Wikipedia)

This idea actually helps me. Before, I would listen to a melody and think its beauty wasn’t real, since it was just a bunch of pressure differentials in the air surrounding me. Now I can interpret a melody as a set of sounds, which are fundamental constituents of my reality. The melody and the sounds are real. The pleasure is real. This makes things more meaningful for me.

Funnily, this goes in exactly the opposite direction as all of the self-denying articles I’ve written before. So what do I believe? I’ll say this much: what I believe is irrelevant. I only have faith in doubt.


A Crash Course on the Mathematics of Quantum Mechanics

This is an attempt to teach the math of quantum mechanics to anyone with a high school diploma. Of course, I won’t be completely thorough and I will omit important parts: in particular, I will not use complex numbers, since it would make things too complicated for many people. However, I do believe the main ideas are there. Once you finish reading this article, you should have a working understanding of Hilbert spaces.

So let’s get started.

Hilbert spaces are an extension of our regular 2- or 3-dimensional spaces. I will use 2-dimensional geometry in my examples for the sake of simplicity. The idea is to extend our usual concepts of length and distance (and a bunch of others) to spaces with an arbitrary number of dimensions.


A set of points in a 2-dimensional space.

You will probably remember from high school that you could represent points in a 2-dimensional space by using two numbers in parentheses, separated by a comma. In the image, you can see the coordinates of several points.

We can find the distance between two points by creating a vector. A vector is just the subtraction of one point from another and can be represented as a little arrow between them. For example, the vector between the points (4,3) and (2,0) is:

v = (4,3) – (2,0) = (4 – 2,3 – 0) = (2,3)


You can see how the vector forms a right triangle with the x axis and a line parallel to the y axis

Where we just subtract each pair of numbers. A vector, then, is a set of two numbers. We can imagine it as an arrow that goes from (0,0) to the point defined by its two numbers.

What’s cool about vectors is we can define their length. Since a vector forms a right triangle with the axes (look at the image), all we need to do is apply Pythagoras’s theorem. Thus, the length of our vector above is:

$latex \left\|v\right\|=\sqrt{2^2+3^2}=\sqrt{13}$

Vectors also have a really cool operation defined on them, called the scalar product. This is just taking two vectors, multiplying their numbers pairwise and adding the results up. For example, the scalar product of (3,2) and (4,1) is:

$latex (3,2)\cdot(4,1)=3\times{4}+2\times{1}=14$

There are several cool things about the scalar product. One is that, if two vectors are perpendicular, their scalar product is zero. Another is that the scalar product of a vector with itself can never be negative. This happens because any number squared is positive or zero. In fact, the length of a vector can be defined as the square root of its scalar product with itself! That is:

$latex \left\|v\right\|=\sqrt{v\cdot{v}}$

This will become important later: if we have some space where we have defined a scalar product, it is trivially easy to add a definition of length just by taking its square root!
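Both operations are one-liners in code. A quick sketch, written for vectors of any dimension:

```python
import math

# The scalar product and the length it induces, for vectors of any dimension.
def scalar_product(u, v):
    return sum(a * b for a, b in zip(u, v))

def length(v):
    # Length is just the square root of the vector's scalar product with itself.
    return math.sqrt(scalar_product(v, v))

print(scalar_product((3, 2), (4, 1)))  # 14
print(scalar_product((1, 0), (0, 1)))  # 0: perpendicular vectors
print(length((2, 3)))                  # sqrt(13), about 3.606
```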

Another fundamental concept is that of a base of unit vectors, which is really not as complex as it sounds. A unit vector is a vector with length 1. For example, (1,0) is a unit vector. I can express any vector as a weighted sum of unit vectors which are perpendicular to each other. For example, the vector (4,3) can be expressed as:

(4,3) = 4 (1,0) + 3(0,1) = (4,0) + (0,3) = (4,3)

Just like I used (1,0) and (0,1), I could have used any other pair of unit vectors, as long as they are perpendicular to each other. Depending on the dimension, I’ll have more or fewer such vectors, which together form what is called an “orthonormal basis.” In two dimensions, I have two: (1,0) and (0,1) (or others, but always two at a time). In three dimensions, three: (1,0,0), (0,1,0) and (0,0,1). In N dimensions, I’ll have N such vectors, which we normally label ei, where i is a number that goes from 1 to N.
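This is easy to check numerically: the coefficient of a vector along each basis vector is just their scalar product, and any orthonormal basis recovers the vector. A sketch (the 45-degree rotated basis is an arbitrary example):

```python
import math

# Decomposing a vector in an orthonormal basis: each coefficient is just
# the scalar product of the vector with the corresponding basis vector.
def scalar_product(u, v):
    return sum(a * b for a, b in zip(u, v))

def coefficients(v, basis):
    return [scalar_product(v, e) for e in basis]

def reconstruct(coeffs, basis):
    dim = len(basis[0])
    return tuple(sum(c * e[i] for c, e in zip(coeffs, basis)) for i in range(dim))

v = (4, 3)
standard = [(1, 0), (0, 1)]
r = 1 / math.sqrt(2)
rotated = [(r, r), (-r, r)]  # the standard basis rotated by 45 degrees

for basis in (standard, rotated):
    print(reconstruct(coefficients(v, basis), basis))  # recovers (4, 3), up to rounding
```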

What we’d like to do is transport all of these notions to a system with an arbitrary number of dimensions. We would also like to generalize the notion of a vector, so that the damnedest things can also be considered one.

A Hilbert space can be thought of as a certain set of elements (we don’t know what they look like yet) that obey certain properties. These elements could be, for example, vectors, and their properties those stated above. In order to be a Hilbert space, these elements have to have the fundamental properties that define a Hilbert space. If we can prove that a certain set of elements (for example, cars) has the fundamental properties of Hilbert spaces, we can automatically prove that they have every single other property that Hilbert spaces have. The power of this approach will be seen later on.

So let’s define a Hilbert space.

First, a Hilbert space must have an inner product. This inner product is an operation that takes two vectors and gives us a number. We’ve already encountered one such inner product: the scalar product above.

Because we’re now too cool for school, we don’t write the inner product of x and y as a multiplication. Instead, we write it as <x|y>, where I’m using the notation already used in quantum mechanics to make the transition smoother. It would be quite tedious to get into each property of the scalar product, so I’ll just mention the most important ones. It has to happen that:

$latex \langle x|y\rangle=\langle y|x\rangle$

(we don’t want the scalar product to change if we change the order of the vectors).

$latex \langle x|x\rangle\geq{0}$

This is extremely important: if we want to define a length based on the inner product (remember, we will take its square root) then we need the inner product to be always positive or things will break (we would have imaginary distances, which make no sense).

Now we can define the length of a vector in the Hilbert space. We don’t call it “length” though, but “norm.” So we define the norm as:

$latex \left\|x\right\|=\sqrt{\langle x|x\rangle}$

See? Once we have the scalar product, getting the length is a breeze! Oh, and if we want to find the distance between two points, all we need to do is calculate the norm of their difference, that is:

$latex d(x,y)=\left\|x-y\right\|$

Now we almost have a Hilbert space. The only requirement left is for it to be complete. This basically means that every point must be accounted for: we can’t have “holes” in our space. I won’t go deep into this or your brain will explode.

So now that we have a Hilbert space, we can use it. For example, we can show that functions (yes, like y = sin x) make up a Hilbert space. In fact, they make up an infinite-dimensional Hilbert space. Why? Well, we can imagine a vector as a mapping from natural numbers to real numbers. For example, the vector (2.5, 3.2) is nothing but a mapping like:

Coordinate 1 -> 2.5

Coordinate 2 -> 3.2

A function is also a mapping. For each x, we have a y. However, in this case the function assigns one number to each real number. Since there’s an infinity of real numbers, functions are a mapping of an infinity of coordinates to an infinity of numbers. Hence, infinite dimension.

But fear not: we can operate with functions just as easily as with vectors. All we need to do is define a suitable scalar product. Once that’s done, we’re set. In the case of functions, we define the scalar product as:

$latex \langle f|g\rangle=\int_a^b{f(x)g(x)dx}$


The integral is just the area under the curve.

Don’t be intimidated by the integral sign. All that we’re doing is adding up the y value for every point in the interval between a and b. In this case, we’re adding up the product of f(x) and g(x) for each x in the interval between a and b, which is equivalent to calculating the area under the curve defined by it. Look at the picture to see it more clearly.

(Here, a and b are arbitrary points. In fact, each choice of points defines a different Hilbert space!)

It’s easy to check the integral meets our requirements. For example, swapping the order of the functions has no effect (see the first requirement above). Similarly, the integral of a function squared has to be positive (since the square of anything cannot be negative). Therefore, its integral (which is the sum of all values) has to be positive too! There, we have a working scalar product.

And, since we have a scalar product, we can define a length. As I told you before, we define the length of a “vector” (in this case, a function!) as the square root of its scalar product. And there you go! We have an infinite-dimensional Hilbert space.
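Here is the whole construction in a few lines of Python, with the integral approximated by a simple Riemann sum (a crude but serviceable stand-in):

```python
import math

# The same recipe as for vectors, applied to functions: an inner product
# (approximated numerically) and the norm it induces.
def inner(f, g, a, b, steps=100_000):
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) * g(a + (i + 0.5) * dx)
               for i in range(steps)) * dx

def norm(f, a, b):
    # The "length" of a function: square root of its inner product with itself.
    return math.sqrt(inner(f, f, a, b))

# sin and cos are "perpendicular" on [0, 2*pi]: their inner product vanishes.
print(inner(math.sin, math.cos, 0, 2 * math.pi))  # ~0
# The norm of sin on [0, 2*pi] is sqrt(pi):
print(norm(math.sin, 0, 2 * math.pi))             # ~1.7725
```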

Now for something really cool.

Just like in the 2-d vector space, it has to be possible to express each function (which is a vector) as a sum of a certain number of unit vectors. That is, in 2-d we could express the vector (4,3) as the sum 4 × (1,0) + 3 × (0,1). In infinite dimensions we need an infinity of unit vectors, but we can still do the same! This tells us:

  1. I can find an infinity of functions that are “perpendicular” (their inner product is 0) to each other and which have unit “length.”
  2. I can express any other function as an infinite sum of those.

This is an immediate consequence of the properties above.

This is huge! For example, I can create a family of unit functions that look like “sin x”, with some modifications. This is commonly known in engineering as a Fourier series. It is automatically true that any function (for example, x²) can be expressed as a sum of them. That is, x² is equal to an infinite sum of sine functions! Similarly, it can be shown that sine functions are equal to an infinite sum of powers of x. And a long, long etcetera. That is, I can express any function I bloody please as an infinite sum of other functions that apparently have absolutely nothing to do with it.
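Here is a numerical sketch of that claim for f(x) = x², expanded in the family sin(nx) on the interval (0, π). (The interval and the plain Riemann-sum integration are my choices for illustration.)

```python
import math

# Expand f(x) = x**2 on (0, pi) in the orthogonal family sin(n x),
# using the integral inner product from above.
def inner(f, g, a, b, steps=10_000):
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) * g(a + (i + 0.5) * dx)
               for i in range(steps)) * dx

f = lambda x: x ** 2
a, b = 0.0, math.pi

def coeff(n):
    # Projection of f onto sin(n x), divided by that function's squared norm.
    s = lambda x: math.sin(n * x)
    return inner(f, s, a, b) / inner(s, s, a, b)

coeffs = [coeff(n) for n in range(1, 101)]

def partial_sum(x, terms):
    return sum(coeffs[n - 1] * math.sin(n * x) for n in range(1, terms + 1))

x = 1.0
print(f(x))                           # 1.0
print(round(partial_sum(x, 5), 3))    # a rough approximation
print(round(partial_sum(x, 100), 3))  # much closer: more sine terms, better fit
```

A sum of sines really does creep up on x², even though the two look like they have nothing to do with each other.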

I have probably messed with your head more than usual by now. I hope you understood something. I tried to be as clear as I could, but I may not have succeeded. Please let me know if you need some clarification below. I plan to make another post where I’ll show you how to actually apply this to quantum mechanics (that’s the easy part) which should enable you to understand some of the more technical discussions on the matter.

I hope I didn’t waste an afternoon (yes, this took long) writing something only physicists and mathematicians can understand!

The Different Faces of Mathematics

Even though most of us use mathematics in our daily lives, few of us ever think about what mathematics is about. We add, subtract and divide routinely, without thinking twice about what we’re doing. We assume math is something that relates to numbers and the relationships between them. We assume wrong.

In this article I want to take a look at the different ways of looking at math and its meaning. I will adopt a pseudo-historical approach: by this I mean I will make the different views unfold like a story in something that may look like history but probably has absolutely nothing to do with actual history. I try to keep my articles reasonably short, so there will be flagrant omissions: for example, I purposely ignore the intuitionist view for two reasons: first, it didn’t fit well with my narrative; second, I am not well-versed in its philosophical underpinnings. So feel free to expand on the topic in the comments below.

Mathematics started indeed being about numbers and how to add them up. It was a practical matter, more than an intellectual one. “You owe me three pigs and a goat.” That was math. The notion got refined as operations got more complex, especially with the appearance of geometry. It quickly became apparent that, by applying elementary reasoning techniques (logic) to some set of simple, obvious truths, one could get extremely complex and non-obvious results. This led many to ask themselves: what are we really doing when we do math?


Plato. Didn’t look too cute in this photo.

Probably the Platonic answer is the most popular. According to Plato, we could access mathematical truths because we had experienced them before in the realm of ideas. That is: somewhere, there is such a thing as a perfect triangle, of which every other triangle is but a rough copy. Mathematical statements, then, are statements about these perfect objects which exist outside our current realm of experience.

The Platonic approach was ontologically loaded: it assumed the world to be a certain way. It turns out most mathematicians don’t like the world much. They like what goes on in their head much more. So an answer based on reality was not satisfying to many, who were looking for a much more abstract, mathematically-minded way of looking at things.

A view that fit much better with this desire for abstraction was that of mathematics as the expression of logic. In this sense, mathematics would be nothing but pure logic, applied to certain propositions. Those initial propositions, given without proof, were called axioms. If one sticks to this interpretation, mathematics is the only branch of knowledge that deals with absolute truths, even if the initial axioms are not true. This is so because all that mathematics states is: “if this set of axioms is true, then so are these other statements.”

An example will clarify things. Imagine my starting axioms are:

“I have four arms.”

“Every person with four arms has two heads.”

According to this, I can state with absolute certainty:

“If the two axioms above are true, then I have two heads.”

This statement does not depend on the truth of the ones above. It is true, regardless.

But mathematicians (and logicians) are way more strict than that. They’d say: “you can’t just say ‘use the laws of logic to infer new truths’ and start producing theorems. You need to specify which laws of logic you’re using.” Hence came mathematical logic, which can be seen as the systematization and symbolization of thought. Mathematical logic was taken to its modern form by Russell (sorry, Tongue Sandwich) and Whitehead in their famous Principia Mathematica.


Venn diagram for the set theoretic intersection of A and B. (Photo credit: Wikipedia)

In this new framework, mathematics had just become the manipulation of symbols according to certain rules. The laws of logic (inference rules) determined which new chains could be built from pre-existing ones, starting with a set of arrays of symbols that were just a given (the axioms.) Funnily enough, instead of taking mathematics closer to truth, this abstraction took it further: mathematics, in fact, wasn’t about truth at all. It was just about manipulating chains of abstract symbols, the meaning of which – if they had any – was to be determined later. In fact, determining the meaning (the possible applications) of a certain mathematical theory was not considered part of mathematics, but applied mathematics. Mathematics just dealt with the abstract relationships between symbols, whatever they meant.

The formalization of logic opened the door to alternative logics. If logic is just a set of rules for creating new chains of symbols, can’t we use a different set? The 20th century saw the appearance of many such alternative logics: intuitionist logic, fuzzy logic or quantum logic, to name a few.

This vision of mathematics as the systematic manipulation of symbols opened another door: the possibility of automation. If all we are really doing is applying pre-determined laws of transformation to a series of symbols, it should be possible to make this process systematic. The arrival of computers (which, by the way, was closely related to the development of mathematical logic) provided such a way. In this sense, a mathematical theory could be seen as a computer program: given the following input (axioms), apply your inference rules (instructions) to find every possible theorem. Mathematics could then be seen as the set of all possible computer programs that can be executed by a machine with infinite memory and capacity (or infinite time to perform its operations.)
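This view is small enough to fit in a program. The sketch below enumerates the theorems of a tiny formal system, purely by symbol manipulation; the two rules are borrowed from Hofstadter’s MIU puzzle, but any rules would do:

```python
from collections import deque

# Mathematics as symbol manipulation, in miniature: an axiom plus
# mechanical inference rules, and the theorems are whatever strings
# the rules can produce.
axioms = {"MI"}
rules = [
    lambda s: s[0] + s[1:] * 2,          # Mx -> Mxx: duplicate the part after 'M'
    lambda s: s.replace("III", "U", 1),  # rewrite the first 'III' as 'U'
]

def theorems(max_length=8):
    """Enumerate every theorem up to a length cap, breadth-first."""
    seen, queue = set(axioms), deque(axioms)
    while queue:
        s = queue.popleft()
        for rule in rules:
            t = rule(s)
            if t != s and len(t) <= max_length and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

print(sorted(theorems()))  # includes 'MI', 'MII', 'MIIII', 'MUI', ...
```

Whether these strings mean anything is, on this view, not mathematics at all: that’s applied mathematics.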

After that, things got more complicated. Notions of soundness and completeness were explored and a lot of Earth-shaking results were obtained. I will go into these more deeply in following articles.

For now, I will give your brains a break. I know mine needs one.

PS Question: being a non-native speaker, I’m confused. Mathematics: plural or singular? In Spanish it’s plural. In English I think I’ve seen both uses. Yeah, I know, asking this question about a whole blog on mathematics completely undermines my reliability. Can’t be helped.


The Logic behind Logic

In my kamikaze search for truth I have taken to questioning absolutely everything. I have found that probably the most ingrained of our prejudices, even more than the existence of the self, is the validity of logic. In fact, I used logic to question the existence of the self and nobody thought of questioning its use. That happens because, without logic, we are left without tools to argue, to discuss, to think. A discussion without logic is not a discussion. Without logic, nothing we believe has any foundation.

So why is logic “true”? What basis do we have for believing in logic? The problem with this question is we cannot use logic to answer it. For example, if logic is true, then if A implies B and B implies C, then A implies C. Also, it cannot happen that A and not-A at the same time. When logic is broken, none of this means anything anymore. So, if I say, “logic is true because it works” I am actually making this logical inference:

  1. Things that work are true.
  2. Logic works.
  3. Therefore, logic is true.

But I am not allowed to do that, since I would be using what I want to prove (logic) in its own proof. We can keep trying, but you’ll soon see it’s a fool’s errand. We could, for example, say: “logic is true because the universe has mathematical laws.” This would look like:

  1. The universe has mathematical laws.
  2. A universe with mathematical laws implies logic.
  3. Therefore, logic is true.

Bertrand Russell’s views on philosophy (Photo credit: Wikipedia)

Again, we’re not allowed to use this type of inference without already assuming what we set out to prove. Or we could say: “all of our science and technology are based on logic. If logic wasn’t true, nothing would work.” But again, we’re using a logical inference. You could have an illogical universe with perfectly fine science and technology. Lack of logic allows you to infer anything. This is not a valid argument.

I challenge the reader to find an argument for the validity of logic that doesn’t use logic. I haven’t been able to find one and I don’t think it is possible, since the very notion of argument necessitates logic.

This, of course, doesn’t mean I’m going to start operating illogically or that I’m going to stop believing in the validity of scientific discoveries. But it shows how shaky the ground under our most deeply-rooted beliefs is.

It also shows that there is nothing special about logic. That’s why it’s perfectly fine to use modified logics, such as fuzzy logic, to try to understand the world. None of them have any justification, so we just take the one that fits, even though the fact that it fits means absolutely nothing.

If we can’t trust logic, what the heck can we trust? What on Earth can we know? Any suggestions?

Enhanced by Zemanta

Immortality? Not for Me

This post is part of a series called the Anti-Week. If you don’t know what it’s about, please read this before you continue!

Some people want to be immortal. Why? Beats me.

We spend most of our lives worrying. We’re either replaying our mistakes or obsessing about the future. I can recall very few moments when I’ve been completely, truly content. I’m always thinking about something else: what I’ll do after. What I’ve done before. What I could be doing instead.

Even if I wasn’t, most experiences aren’t that great. They’re pretty neutral, really. For example, writing this. I can feel my brain straining with effort, finding a justification for an opinion I don’t share. Typing. Yes, there’s feeling in my hands. It’s not a great feeling. In fact, my right arm feels kind of tired. This is not pleasant. I do this so that I have something to show. I try to have something to show so that I can express something inside me. Why do I bother? I don’t have the slightest.

Most people argue that happiness is a state of contentment. A state where you don’t want more than you have. This doesn’t mean you have to enjoy what you have: not wanting more is enough. Happiness, in this sense, seems like a lack of negative emotions. That is what we pursue. Not the light at the end of the tunnel but simply being free from pain. It is a pretty bleak prospect and, nonetheless, most people never actually get there.


There are many forms of “pain” (Photo credit: zigazou76)

Let’s say that again: all we can aspire to is to be free from pain. Most of us never actually achieve this admittedly modest goal.

If that’s not depressing, I don’t know what is.

So why do we keep living? Why don’t we kill ourselves?

I believe the only reason we don’t jump out the window is hope. Most humans are unhappy most of the time, but our hard-wired self-delusion kicks in and makes us believe things are bound to get better someday, so we stick around just in case. The reason we don’t kill ourselves is not that living is worth it: it’s just that we think it might eventually be so!

Hope, then, is a bitch. Because old age only makes things worse. We go from fearing the death of our loved ones to experiencing it. Not a fun thing to experience. We go from fearing pain and illness to being sick and in pain. Old age sucks, really. Even more than the rest of our already pretty unbearable lives.

So what is death, then? Death is a failsafe. It is the only way to make sure we don’t continue with our deluded unhappiness for the rest of eternity. It is a way to make sure our suffering ends sooner or later, to free us from pain once and for all, whether we want it or not.

Death is the only thing that can free us from ourselves.

Who in their right mind would like to avoid it?


Fractals Made Easy, Kind Of

You’ve probably heard of fractals. Most of us have at least seen one: they are these funny-looking pictures with lots of edges, colors and complexity. One example is depicted below.


Fractal (Photo credit: rosepetal236)

Fractals, of course, are more than a pretty picture. They are, in fact, a mathematical object. One could argue that the picture is not the fractal itself: the fractal is the mathematical abstraction and the picture is its representation. In fact, if you know the mathematical expression of a fractal, you know all there is to know about it. No matter how complex, how intricate the picture gets, all of its complexity is already contained in a simple formula. That is the beauty of fractals. That is why I think they can teach us a great deal about the universe itself, maybe even about us.

But I’d better stop digressing and start explaining what the heck a fractal is.

You can view a fractal as a recurrence relation: that is, an operation that is performed over and over and over again. This recurrence relation is performed on a certain geometrical shape. An example will surely clarify things.

Imagine I start with this triangle.


Now, I do two things: the first one is to shrink it to one half of its size. The second one is to place three copies of it, one at each vertex of the old triangle. I get this:


I can do this again. In fact, I can do it for as long as I want. I get a sequence like this:

That is a fractal. Not any of those pictures, but the result of infinitely performing this operation of shrinking and moving. That’s it. A fractal can be viewed, then, as an operation of shrinking and moving applied an infinite number of times. In fact, the beauty of all of this is that the initial shape does not really matter: I could’ve started with a picture of Chuck Norris instead of a triangle. The resulting shape would have been exactly the same. Let me emphasize this: the fractal is the shrinking and moving. Nothing else.
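The shrink-and-move operation is easy to put in code. Here is a minimal sketch (the function name and vertex choices are mine, not the post’s) that applies it to a set of points: each pass replaces every point by three half-scale copies, one per vertex of a triangle with corners (0, 0), (1, 0) and (0.5, 1):

```python
def shrink_and_move(points):
    """Replace each point by three half-size copies, one per triangle vertex."""
    new_points = []
    for (x, y) in points:
        hx, hy = x / 2, y / 2                     # shrink to half size
        new_points.append((hx, hy))               # copy anchored at (0, 0)
        new_points.append((hx + 0.5, hy))         # copy anchored at (1, 0)
        new_points.append((hx + 0.25, hy + 0.5))  # copy anchored at (0.5, 1)
    return new_points

points = [(0.0, 0.0)]  # the starting shape barely matters, as noted above
for _ in range(6):
    points = shrink_and_move(points)
print(len(points))  # 3^6 = 729 points approximating the Sierpinski triangle
```

Plotting the points after a few iterations shows the triangular holes appearing, no matter which starting point or shape you pick.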

Fractals are extremely cool for a number of reasons. The first one is that they have non-integer dimensionality. Let me explain this better so it blows your mind properly. You’ve probably been told that a straight line has one dimension, a plane has two dimensions and an object with volume has three dimensions. The dimension, in this case, tells us how many numbers we need to specify the position of a point: for a line, we need one; for a plane, two; for an object with depth, three.

A fractal, however, can have dimensionality 1.23, for example.

So what on Earth does that mean? First, I must point out in this case we need to use an expanded view of dimensionality, called “Hausdorff dimension.” The idea of the Hausdorff dimension is to see what happens to the “volume” of a certain shape when we multiply its length by a certain number. I put volume between quotes because this is not your usual definition of volume.

So let me explain what I mean. The “volume” of a segment is just its length. If I make it twice as long, its “volume” (length) will become twice as much. I can look at this the opposite way: how many segments will I get if I divide the original one in parts measuring ½ of its size? Well, I’ll get two segments, of course.

Let’s call the number of segments I get “n” and the reduction in size (in this case, ½) “s”.

Now, if I take a square instead of a line, its “volume” is its area. How many squares will I get that have sides ½ of the length of the original one? In this case, the answer is 4. So n = 4 and s = ½, since I still divided the original length by two.

If I take a cube, its “volume” is just its volume. How many cubes do I get with sides with length ½ of the original one? In this case, I get 8. So n = 8 and s is still ½.

You can see what I mean in the following figures:


Now, from the examples above, you will agree with me that the number of pieces n I get is determined by the dimension of the figure. It has to happen that:

n = (1/s)^d

Check it for yourself. For 1 dimension (segment) and s = ½, I get 2; for 2 dimensions (square) and s = ½, I get 4; for 3 dimensions (cube) and s = ½, I get 8. If, instead of dividing the lengths by 2, you divided them by 3, you’d get 3, 9 and 27.

Now for the mind-bending stuff. I can take that equation and turn it on its head. I can say: if I know how many pieces I can make by reducing the size by s, I can infer the dimensionality. Using some logarithmic math (which I hope you remember from high school) I get:

log n = d log (1/s)

And, from it:

d = log n / log (1/s)

This is the formula you can use to calculate the dimensionality of a fractal set.


Triangular-serpinski (Photo credit: Wikipedia)

Let’s take the triangular figure we had before, called the Sierpinski triangle. In that case, we start with one triangle; then we make its side length twice as small, so s = ½, and we are left with 3 new triangles: n = 3. From the formula, we can easily calculate:

d = log 3 / log 2 ≈ 1.58

What does this even mean? Well, it means the Sierpinski triangle takes up more space than the “number line” (mathematicians will call it the “Real line”), which is the line spanning every number from minus infinity to plus infinity. But it also takes up less space than the whole number plane (again, mathematicians will know this as the “Real plane”).
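The formula d = log n / log (1/s) is easy to play with in code. A minimal sketch (the function name is my own) that reproduces the segment, square, cube and Sierpinski triangle:

```python
from math import log

def dimension(n, s):
    """Hausdorff-style dimension: n pieces when lengths shrink by factor s."""
    return log(n) / log(1 / s)

print(dimension(2, 0.5))  # segment: 1.0
print(dimension(4, 0.5))  # square: 2.0
print(dimension(8, 0.5))  # cube: 3.0
print(dimension(3, 0.5))  # Sierpinski triangle: ~1.585
```

You can plug in any self-similar shape: count the copies n you get at scale s, and out comes its dimension, integer or not.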

Many shapes are fractal. In fact, almost every shape occurring in Nature is. Not all of them can be easily described by a mathematical relationship. In fact, the definition of fractal is quite loose and it only implies a high degree of recursivity and some kind of self-similarity. Coastlines are fractal; lightning is fractal and so is broccoli. Anywhere you see a rough, infinitely curved shape, you are seeing a fractal.

So there you go: a hopefully understandable explanation of fractals that could have you calculating dimensionalities, if you so choose. If you’re interested, there is plenty of other information online (though a lot of it is quite abstract), including programs that let you make your own Sierpinski triangle, create all kinds of fractals, and explore two classic fractals, the Mandelbrot and Julia sets.


PS I have to thank Johannes Nelson of Chasing Wild Geese for giving me the idea of writing this post.


The Wonder of Mathematics: non-Euclidean Geometry I

The beauty of mathematics lies in unification: in the taking of two completely unrelated concepts and merging them into a deeper, superior whole. A perfect example of this is Riemannian geometry. In it, Riemann not only devised a framework that accommodated every curved geometry allowed to date, but generalized it to include a host of other possibilities and, as a bonus, unified geometry and differential calculus. In this post I will focus on the first part and leave differential calculus for another day.

So let’s start with geometry. I am guessing you are already familiar with Euclidean geometry: it is the geometry you are taught at school. The one where angles in a triangle add up to 180 degrees and parallel lines never cross.

It turns out that the whole of Euclidean geometry can be deduced from a set of statements we call “axioms.” An axiom is supposed to be a sentence that requires no proof. In ancient times, it was regarded as a “self-evident truth.” Nowadays, it is just viewed as an arbitrary rule which could be different. A little like the rules of a game: something you need to start playing, but which does not necessarily reflect reality.


English: Diagram illustrating Euclid’s parallel postulate (Photo credit: Wikipedia)

One of the axioms of Euclidean geometry is the “parallel postulate.” You have probably heard some version of it:

“Given a line and a point not on it, exactly one line through that point never crosses the original line.”

The thing about the parallel postulate is that it seems quite obvious.

So obvious that, in fact, many mathematicians were convinced that it shouldn’t be necessary. It should be possible, they argued, to infer it from the rest of the axioms. So they tried and tried, without success.

Finally someone had a great idea. They realized we don’t need to deduce this axiom from the others. All we need to do is create an “alternative geometry” where we replace it with something like “parallel lines cross at least once.” This new geometry, they reasoned, will obviously be inconsistent: all we need to do is exhibit the inconsistency and thus prove our result by reductio ad absurdum.

And so they tried. But they met with a surprise: the new geometry didn’t show any inconsistencies. It was just as functional as Euclidean geometry. It just made some wild predictions: the angles in a triangle, for example, didn’t add up to 180 degrees. Angles between parallel lines changed. But the whole was consistent. In fact, they found out that there were many ways of modifying the “parallel lines” axiom and that all of them gave rise to consistent, although different, geometries.


Three examples of different geometries: Euclidean, elliptical and hyperbolic geometry. (Photo credit: Wikipedia)

That created quite a dilemma for mathematicians. What should they do? Incorporate these new geometries to the “real” geometry? Discard them as senseless?

In the end, the faction arguing for incorporating them won, and the new, alternative geometries became as much a part of geometry as classical Euclidean geometry.

In comes Riemann.

Riemann was born into a world where mathematicians had a number of different geometric systems defined by different groups of axioms. What they lacked was a unified formulation, a more general approach that could encompass all of them, including Euclidean geometry.

What Riemann did was, well, it was genius. It was mind-boggling. He started with something called a “manifold.” A manifold is anything that can be mapped onto a set of points. For example, a napkin is a manifold: I can create a mapping from the points of my napkin to a set of number pairs (a subset of the Real plane, in this case). Bear in mind that Riemann made no assertion about the manifold itself. He didn’t define any straight lines or angles or distances. All of that would come later. All he did was define a mapping.

Then he did something incredibly amazing: he unified geometry and differential calculus. Unfortunately, I cannot explain this here (I will do so in a different post).

We can, however, delve a bit more into what he did next: redefine the notion of distance. Let’s see this with an example: if you want to know the distance between two points in classical geometry, you draw a little right triangle and use Pythagoras’s theorem to calculate it. What Riemann did was expand this definition, allowing something called the “metric tensor” to affect the distance. This simple device allowed him to take into account the curvature of any space, thus seamlessly integrating all the non-Euclidean geometries devised to date.
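In symbols (using modern index notation, which goes somewhat beyond the post itself), the two notions of distance look like this: Pythagoras on the left, Riemann’s generalization with a position-dependent metric tensor on the right:

```latex
\underbrace{ds^2 = dx^2 + dy^2}_{\text{Euclidean (Pythagoras)}}
\qquad\longrightarrow\qquad
\underbrace{ds^2 = \sum_{i,j} g_{ij}(x)\, dx^i\, dx^j}_{\text{Riemannian}}
```

When g_ij is the identity matrix everywhere, the second formula collapses back to Pythagoras; the curvature lives entirely in how g_ij varies from point to point.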


Measuring distance in a curved space (Photo credit: Wikipedia)

The beauty of the metric tensor was that it depended on the position, so you could have different curvatures in different regions. This allowed for alternate geometries which were vastly more complex and interesting than any devised so far.

This may seem like a lot of lucubration with very little practical application. In fact, it is quite the opposite. Our universe does not behave according to the rules of Euclidean geometry, but according to the rules of Riemannian geometry. If you launch three satellites into space with lasers making a triangle, the angles won’t add up to 180 degrees. Parallels will cross.

Riemannian geometry was in fact the basis for the general theory of relativity. It is the substratum without which the theory wouldn’t be able to stand. Einstein’s genius was realizing that our space-time’s curvature was in fact being caused by matter. That gravity was, really, nothing but a bending of space itself.

The beauty of mathematics is that it unifies and, with unification, new phenomena emerge. These new phenomena often turn out to have very real consequences we can observe in our daily lives. The beauty of mathematics is that it uncovers a deep truth, a magical truth, by exclusively rational means. It is thought at its purest, its most refined. It is as elevated as an art can get.

Sometimes I think some equations should be displayed in art galleries. Riemann’s definitely deserve to be.


A Constant that Contains all the Knowledge in the Universe

Imagine a number that contained all the truths in the universe. A number where each decimal place was extremely condensed knowledge. This number would be some kind of philosopher’s stone, where information would be so refined that no more compression would be possible.

This number exists. It is called Chaitin’s constant and, unfortunately, it cannot be calculated. Even so, some people are attempting to and partially succeeding. Read on to know what it is about.

If you are used to your average high school math, this is going to sound terribly alien. It is a radically different way of thinking about mathematics that started, arguably, when Alan Turing started thinking about computers. The key notion in all this conundrum is information.


Turing at the time of his election to Fellowship of the Royal Society. (Photo credit: Wikipedia)

How do we define information? There are many ways, but an extremely useful one is to take the size of the smallest possible program that produces a certain output. For example, writing the character “a” an infinite number of times requires an extremely short program, namely:

while True: print("a")

So, even though there are infinitely many characters, the information content of the chain of “a”s is extremely low. Similarly, rational numbers also contain very little information in general, since their decimal expansion is either finite or eventually periodic.
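To see why, here is a sketch of the “short program” for a rational number (the function is mine, for illustration): plain long division produces as many digits of 1/7 as you like, so even its infinite decimal expansion carries almost no information.

```python
def decimal_digits(num, den, count):
    """First `count` decimal digits of num/den (0 < num < den), by long division."""
    digits, r = [], num
    for _ in range(count):
        r *= 10                 # shift to the next decimal place
        digits.append(r // den)
        r %= den                # carry the remainder forward
    return digits

print(decimal_digits(1, 7, 12))  # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
```

Twelve digits, a million digits: the program stays the same handful of lines, which is exactly the sense in which 1/7 holds very little information.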

How about irrational numbers such as pi or the square root of two? One could argue their information content is higher, since they have an infinite number of random decimal figures. However, both numbers are easy to calculate with the appropriate algorithm and therefore have a low information content.

Before we go on, let’s first reflect on what a number with a high information content should look like. First, it should have a random decimal or binary expansion. If it didn’t, there would be some kind of rule we could follow to calculate it, therefore allowing us to write a short program to do so. Secondly, there should be no simple program capable of calculating it. If there were, its information content would be low.

In fact, if we want a number with an infinite amount of information, it can only be generated by an infinitely long program. Nothing less will do: if the program is finite, its length is also finite, and thus so is the number’s information content.

Chaitin’s constant does precisely that. It is incompressible information. It is a number with an infinite amount of information that requires an infinitely long program to be calculated. Let me explain how it is defined, though bear in mind this is by no means rigorous. There are plenty of more thorough but less understandable explanations online.

The first concept we need is that of a halting probability, which is the probability that a given program will stop. Determining whether a program will stop is straightforward for some of them: for example, the program that evaluates “2+2” will certainly stop after a finite time.


The Halting Problem (Photo credit: pboothe)

However, it turns out that it is, in general, not possible to find a method that decides whether an arbitrary program will halt. One way to see why: if we could, we would be able to settle a huge class of open mathematical problems, since many of them can be rephrased as halting problems. For example, take Goldbach’s conjecture, which says that every even number greater than 2 can be expressed as the sum of two primes. We can put it this way:

“Will the program that searches for an even number that cannot be expressed as the sum of two primes ever halt?”
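A finite sketch of that search program (my own toy version: it only scans a bounded range, so unlike the real one it always terminates):

```python
def is_prime(k):
    """Trial-division primality test."""
    if k < 2:
        return False
    return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

def is_sum_of_two_primes(n):
    """True if even n can be written as p + q with p and q prime."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# The real program would loop forever over all even numbers, halting only on
# a counterexample; here we stop at 1000 and, as expected, find none.
counterexamples = [n for n in range(4, 1001, 2) if not is_sum_of_two_primes(n)]
print(counterexamples)  # []
```

Remove the upper bound and you get exactly the program in the quote: it halts if and only if Goldbach’s conjecture is false.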

If we solved the halting problem, we would instantly know whether Goldbach’s conjecture is true. Mathematicians would lose their jobs. But mathematicians need not fear: Turing proved, with a diagonal argument closely related to Gödel’s theorem (which we will visit some other day), that no such general method exists. The halting problem cannot be solved.

Chaitin’s constant can be seen as the probability that a randomly chosen program will stop after a finite time. Imagine, for example, all the “programs” that can be written as a string of three bits (001, 010, 011, etc.). Now imagine only the program “001” stops. What is the probability that a program with 3 bits will stop? It is the same as the probability of drawing the string “001” at random or, equivalently, of tossing a coin and getting tails, tails, heads. That is easy to calculate: the probability is ½ * ½ * ½ = ⅛, since the probability of each coin toss is ½.
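The toy calculation above can be written out with exact fractions (the machine and its halting programs are, of course, hypothetical):

```python
from fractions import Fraction

def toy_omega(halting_programs):
    """Sum each halting program's probability: a factor 1/2 per bit of its code."""
    return sum(Fraction(1, 2 ** len(p)) for p in halting_programs)

print(toy_omega(["001"]))        # 1/8: the coin-toss probability above
print(toy_omega(["001", "11"]))  # 3/8: add 1/4 if a two-bit program also halted
```

The real Ω is exactly this kind of sum taken over all halting programs of a universal machine, which is why it can never be finished.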


A metaphor for Chaitin’s Omega (Photo credit: Wikipedia)

If there were more programs that halted, then we would have to add their probabilities, too, in order to get the total. Since Chaitin’s constant is the sum of the halting probabilities of all possible programs, of every length, the sum extends over infinitely many programs.

What is the value of Chaitin’s constant? We know it has to be a number between 0 and 1. Since it cannot be computed (or it would solve the halting problem), it has to be algorithmically random, which means there is no shortcut: the shortest program that outputs the first n bits of Chaitin’s constant must itself be roughly n bits long.

Even though Chaitin’s constant cannot be computed in full, Cristian Calude and others have actually started computing it, obtaining the first 64 bits of one particular version. Bear in mind, though, that Chaitin’s constant depends on the programming language used, so there is actually a whole family of Chaitin’s constants with equivalent properties.

Knowing Chaitin’s constant would amount to knowing every single mathematical truth in the universe. That is because knowing the first N bits of Chaitin’s constant would allow us to solve the halting problem for every program up to N bits. Knowing them all would allow us to find a general algorithm that could tell whether a program would halt. And, as I’ve said before, knowing whether a program will halt for every possible program is equivalent to solving every mathematical riddle.

That is why Chaitin’s constant encodes all the (mathematical) knowledge in the universe.
