For reasons explained briefly in “The Emergence of Logical Positivism”, the Western intellectual tradition came to two disastrously wrong conclusions: (1) only science can provide us with valid knowledge, and (2) science is based on observables, unlike religion, which is based on unobservables. Furthermore, since the qualitative aspects of observables are often subjective, a preference for the objectivity conferred by measurement was expressed by Lord Kelvin as follows:
When you can measure what you are speaking about, and express it in numbers, you know something about it; when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.
This idea, that everything worth knowing can be reduced to numerical measurement, has led Western intellectuals to attempt to measure everything, without asking whether the thing in question is measurable in the first place. As discussed in an earlier post on “Statistics as Rhetoric”, this has led to the creation of fictional numbers: they appear to measure something, but nothing in external reality corresponds to what they purport to measure. The widespread and commonplace attempts to measure everything in sight rest on the misconception expressed in the quote above, which it is convenient to call “Kelvin’s Blunder”: the belief that everything about which we can have knowledge can be measured. Attempts to measure the unmeasurable have led to widespread folly. Can we measure the complex and multidimensional quality of intelligence by asking a few questions about math and English? Can we measure love by measuring the pressure per square inch exerted during a hug? After such absurd measures are defined, the statistician is asked to endorse the crime by not examining what is being measured, and why it is being measured; instead, he or she is simply to analyze the numbers produced by the “field expert”.
We have provided many reasons why attempts to measure that which cannot be measured lead to more harm than good in our paper “Corruption: Measuring the Unmeasurable”, Humanomics, Vol. 25, No. 2, pp. 117-126, June 2009. Here we provide a simplified overview of some of the key arguments from the paper.
Before we attempt to measure something, we must agree on what that thing is. There must be something in external reality which can be measured in such a way that all observers who measured it would come to the same conclusion regarding the measure. Regarding corruption, there is substantial disagreement about exactly how it should be defined. For example, there is solid evidence of the widespread financial corruption and white-collar crime which led to the global financial crisis. Banks sold deceptively attractive mortgages to borrowers known to be weak and unable to pay, because they knew they could sell these mortgages at face value to unsuspecting victims in the mortgage-backed securities (MBS) market. They did this with full knowledge that the lifetime savings of the borrowers would be wiped out in a default, and that the purchasers of the MBS would be left holding worthless paper. The credit rating agencies collaborated in the fraud by providing AAA ratings to worthless bonds. The insurance companies collaborated by insuring gambles which were likely to fail, on the strength of the Too-Big-To-Fail principle. This led to a financial collapse which required an estimated $29 trillion in bailout payments by the US Government. This is arguably the single largest corrupt transaction in all of human history. Now we can ask a number of questions about how we could QUANTIFY the levels of corruption involved in this transaction. A simple starting point might be the amount paid in bailouts, estimated at $29 trillion. But this does not measure the losses to the homeowners thrown out of their homes, the job losses, the disruption of families, and the hunger and homelessness created, which reached levels not seen in the USA since the Second World War.
We might want to count the number of people involved in the corrupt transactions: those who made the fraudulent loans, those who certified them as safe, and those who insured them. Again, we might want to differentiate people who did so willingly and knowingly from those who were innocent dupes of the system. Without a clear definition of precisely what corruption is, it is impossible to say which of the many numbers we could invent to measure it is the right one.
One way to prevent such fraud with numbers is to require an empirical verification criterion. If I say that the number X measures the quality Q, I must be able to offer a way that anyone can verify this, at least in principle. For example, if I say that the population of Pakistan was XYZ on 30th June 2018, then anyone who went and counted the population on that day should arrive at this same number. We do not require that people actually be able to build a time machine to carry out the count. Rather, the test provides conceptual clarity regarding the target of measurement — what is it in external objective reality, the same for all observers, which the number is trying to measure? Regarding corruption, we would not find any such agreement, and hence we would argue that it is futile to try to measure something which we cannot even define precisely.
A second argument against the attempt to measure corruption is the even simpler idea that two numbers cannot be reduced to one without loss of information. In order to create a clear external target for the corruption measure, suppose we confine attention to mortgages which were sold and which ended up in default. A large number of questions can (and should) be raised about how well this measures financial corruption, but putting these to one side, let us assume that we can get accurate values of two numbers. For the sake of illustration, suppose 1000 mortgagors defaulted, and the average size of the loan default was $5000. Now consider three researchers, R1, R2, and R3, who each make the following arguments for their own measures of corruption. Researcher R1 says that corruption is a human phenomenon: if few people in society are corrupt, then we should count the society as honest. Assuming (and this is a big assumption) that all defaulters were corrupt, R1 counts 1000 as the correct measure of corruption. Researcher R2 says that corruption is really about having enough money: people are forced to be corrupt because they lack the financial means to survive. In the present example, if people had $5000 each they would not have defaulted, so R2 counts $5000 as the correct level of corruption in the society. Researcher R3 says that what really matters is the cost to society, which can be measured by the product 1000 x $5000 = $5 million, and this is the right measure of corruption. Which of them is right?
The answer is that there is no way to settle this argument objectively. Each of the three is right in their own way. We cannot find one number to measure corruption, even in the simplest case, where it is a completely quantitative phenomenon captured by two numbers: 1000 defaulters with an average default size of $5000. More complex cases, where thousands of numbers can be used to characterize something which is measurable but multidimensional, involve a massive loss of information when we try to find ONE number to summarize them. This explains why Lord Kelvin’s idea, quoted above, is a major blunder. Despite this, the attempt to measure the unmeasurable continues to be made, with devastating consequences. For an illustration in the context of business management, see “Beyond Numbers and Material Rewards”.
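The information-loss argument can be sketched in a few lines of code, using the hypothetical figures from the example above (1000 defaulters, $5000 average default). The variable names and the second "society" are illustrative inventions, not taken from the paper; the point is only that collapsing two numbers into one discards information that the three researchers weight differently.

```python
# Hypothetical figures from the text: two numbers describing one society.
defaulters = 1000        # Researcher R1's measure: count of (assumed) corrupt people
avg_default = 5000       # Researcher R2's measure: dollars needed to avoid default
total_cost = defaulters * avg_default   # Researcher R3's measure: cost to society

print(defaulters, avg_default, total_cost)  # 1000 5000 5000000

# Loss of information: an invented second society, with 500 defaulters
# at $10,000 each, collapses to the SAME single number under R3's measure...
assert 500 * 10_000 == total_cost
# ...even though R1 would call it half as corrupt and R2 twice as corrupt.
```

No analysis of the single summary number can recover the distinction between the two societies; that is precisely what is meant by reducing two numbers to one with loss of information.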
The central goal of our online course on “Real Statistics: An Islamic Approach” is to remove the blindfolds placed on the eyes of the statistician by the conventional approach. The statistician is just supposed to look at the numbers, disregarding where they come from, who produced them, and for what purpose. So we look at the CPI (corruption perception index), and we calculate the mean, median, and mode, create a ranking of countries, and run correlations with other variables to find the determinants, causes, and correlates of corruption. Our paper “Corruption: Measuring the Unmeasurable” opposes Lord Kelvin’s Blunder. Instead, we argue that we should look at who is producing these CPI numbers, and why. We find a 98% correlation between the CPI and GNP per capita in the data set examined in the paper. In other words, corruption is just another measure of poverty, while honesty and integrity are just another measure of wealth. This is clearly wrong. As we have seen, the single largest corrupt financial transaction in human history took place in the USA, which is among the richest countries in the world. We can look beyond this to ask WHY the CPI was produced, and for what purpose it is being used.
By looking at the dates around which this index was produced and started being used, we can see that it is related to the increasing concern with governance which emerged in the 1990s. Given that governance had been a problem since much earlier, the question arises as to why it became a central concern only in the 1990s. The answer can be found by noting that the 1990s saw an increasing awareness of the widespread failure of neoliberal paradigms for development. In “The East Asian Miracle”, even the World Bank (WB) acknowledged that the transformation from agricultural to industrial economies in East Asia was accomplished by policies very much in conflict with WB prescriptions. On the other hand, it was also abundantly clear that not one of the countries which followed WB prescriptions prospered as a result. In order to prevent a revolt against the dominant financial power structures, which benefit enormously from neoliberal policies, it was necessary to find a scapegoat. Accordingly, “governance” was invented, and the failure of WB policy was blamed on corruption instead of on the disastrously poor policies. The “knowledge” that we have about the world is generated by the “powers” that govern the world and provide the billions of dollars of funding required to collect the statistics, so as to create a picture of the world favorable to their interests. See my essay on Michel Foucault: Power/Knowledge for a deeper discussion of this link.
This whole essay illustrates the basic principle of Real Statistics, as contrasted with conventional statistics. Real Statistics rejects the division of labor under which the statistician analyzes the data while the field expert uses the analysis to understand the reality. These two activities cannot be separated. The real arguments are always about underlying realities, and a real statistician must look beyond the numbers to grasp these underlying realities and how they are inevitably distorted by numerical representations. He or she must participate in the arguments being made with these numbers, instead of being an unwitting bystander. By agreeing not to look deeper, statisticians unknowingly contribute to strengthening the power regime, by accepting the use of defective numbers which misrepresent reality. For just one illustration of this process, see “GNP as Statistical Rhetoric”.
POSTSCRIPT: For an introduction to the ideas which underlie the online course on “Real Statistics: An Islamic Approach”, see the four-part lecture on:
- Real Statistics (1/4) Fundamentals of an Islamic Approach
- Real Statistics (2/4) Teaching Statistics as an act of Worship
- Real Statistics (3/4) Statistics as Rhetoric
- Real Statistics (4/4) The Illusion of Objectivity