tag:blogger.com,1999:blog-72806043508518671842024-03-07T00:04:50.933-08:00Armchair RuminationsArmchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.comBlogger15125tag:blogger.com,1999:blog-7280604350851867184.post-36552058436913480572008-10-05T12:05:00.001-07:002008-10-05T14:43:37.023-07:00Is the State Obsolete?I had a very interesting discussion yesterday about whether the concept of the state (i.e., country) is now obsolete. The basic premise is that <a href="http://en.wikipedia.org/wiki/The_World_is_Flat">the world is flat</a>, and that national boundaries are irrelevant in the current global economy. The arguments were roughly along the following lines:<br /><ol><li>Corporations act in ways that benefit people of all countries. The basic unit of society should be the corporation, not the nation. An American company that lays off people in America frees them up to do better, more imaginative, more creative, more cerebral work. The same company, which hires replacements in India, improves the lives of those Indians, who would otherwise have been unable to find work that paid them so well.</li><li>The argument can be taken further: brain-drain is not really a drain at all, because national boundaries don't matter. Thus top brains and talent moving from India to the US is not a concern. It is better to use your brains in the US than to underuse them in India. And India benefits from this: foreign remittances to India are higher than to any other country in the world.<br /></li><li>There is only one country in the world, the USA, which has an inherent culture of innovation and discovery. (Or perhaps two or three others at most, Germany being a possibility.) This is why no innovation happens in India, and cannot happen in India -- because the people, by nature, lack innovativeness.</li><li>India, more than any other place, doesn't deserve nationhood because of the diversity of its people.
An Indian feels like a stranger in a different part of his own country. The US feels more like home than India.<br /></li></ol>I didn't agree with these points. My answer yesterday to the question: "What is the point of nations?" was "Bargaining power". Here's a Q & A:<br /><br />Q01: What is the point of nations?<br />Ans: Bargaining power. A nation is nothing more than a collective that bargains in order to increase the standard of living (SoL) for its citizens. It is the same concept as that of a workers' union.<br /><br />Q02: What is the point of nationalism?<br />Ans: The reason a citizen should support his nation (and the concept of nationhood) is that it increases his chances of a better SoL. Nationalism increases a nation's ability to bargain, by increasing the nation's unity.<br /><br />Q03: Then why shouldn't everyone in the world pledge their loyalty to those nations that have the highest chances of improving their citizens' SoL? Specifically, the USA?<br />Ans: If an individual's goal is to increase his SoL, he should indeed attempt to become a citizen of the country most likely to increase its citizens' SoL. The reason this doesn't happen in practice is that countries like the USA realize it is not in their best interest, and have laws in place to prevent easy access to citizenship.<br /><br />Q04: Which laws?<br />Ans: To become a citizen, one has to demonstrate both competence (through employability) and American nationalism (through a test and residence). America realizes that notions of the world being flat (in the sense of nonexistent national boundaries) are not in its best interests.<br /><br />Q05: Why is "no boundaries" not in America's best interest?<br />Ans: For Americans to remain prosperous, there needs to be a vastly larger population of non-Americans. There needs to be someone to bargain with, someone to exploit.<br /><br />Q06: Huh?? Why?
What do you mean by "exploit"?<br />Ans: American power has many immediate reasons, but it can be traced back to a form of imperialism. America's prosperity relies on the exploitation of non-Americans, just as the prosperity of every other major power throughout history relied on exploitation of other populations. Unless a vast population of non-Americans exists, it will be impossible to use America's bargaining power to acquire various raw materials from them at prices much lower than the cost it takes to extract them. This is not a bad thing; it is what every major country in the world is trying to do, and is what every trader in a market attempts to do on a daily basis. It's just that America is better at it than other nations.<br /><br />Q07: Even if nations are not irrelevant, why don't we stop at Indian states? Why shouldn't Rajasthan, West Bengal and Tamil Nadu be separate countries? Why do we need the whole of India to be one country?<br />Ans: Because larger countries have more bargaining power than smaller ones.<br /><br />Q08: Then why shouldn't India annex more land and become an even bigger country?<br />Ans: If we can, we should. China knows this; that's why China seized Tibet. But we need to make sure the negative consequences of such an action don't outweigh the gains.<br /><br />Q09: Well, the USA can certainly annex more land. Why doesn't it do so?<br />Ans: The fallout from such an action would have an unjustifiable cost for the USA. It is so stable and has such a high SoL that managing a population of unwilling conquerees would lower the overall American SoL. 
Increasing the American SoL at this point is much more easily accomplished by projection of soft power.Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-2132314978597914222008-09-22T08:50:00.000-07:002008-09-26T17:53:24.914-07:00Van Gogh<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.idoincorporated.com/e-newsletter/images/Q1.07/van_gogh-starry-night2.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 320px;" src="http://www.idoincorporated.com/e-newsletter/images/Q1.07/van_gogh-starry-night2.jpg" alt="" border="0" /></a><br />Van Gogh's paintings <span style="font-style: italic;">Starry Night</span> and <span style="font-style: italic;">Cafe Terrace at Night</span> stir something deep. My interpretation (which van Gogh probably never intended) is that they are a contrast between the warm, familiar fold of civilization and the wild unknown mystery of the celestial sky. In <span style="font-style: italic;">Starry Night</span>, it is as if the monumental forces lying in the hearts of suns and galaxies have descended onto the hamlet of Saint-Rémy, which is getting ready to tuck in for the night, unaware and unconcerned about the fantastic forces at work in deep space.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.geocities.com/alanalogue/7.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 320px;" src="http://www.geocities.com/alanalogue/7.jpg" alt="" border="0" /></a>The same sentiment is stirred by <span style="font-style: italic;">Cafe Terrace at Night</span>: the warmth of familiar surroundings and human company contrasted to the unknowns in the surrounding dark streets, and even more, the unknowns up in the sky.
I can't decide what I want to be: a diner at the cafe or a predator lurking in the dark alleys, looking at the diners and waiting for one of them to leave that safe haven.Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-26090166080656853192008-06-20T08:37:00.000-07:002008-06-20T08:49:37.516-07:00Religion as a Computational SimplificationWhat is religion, why do we need to have faith, why do we need gods?<br /><br />Life includes a series of decisions. Decisions help us optimize our condition, find a route to another condition that is better, more stable, easier or happier. But the number of minute decisions that need to be made is so large that our built-in computer, the brain, is overwhelmed by the computational requirements.<br /><br />So it takes shortcuts. It categorizes the decisions, pushing some, such as picking up the next spoonful of food or stepping aside to avoid a pothole, into a subconscious decision making queue. Others are not so subconscious but are still routine jobs, like signing your name on a credit card bill or going to work in the morning. Even with these reductions on its computational requirements, the brain would be left with too many significant mid- and long-term decisions.<br /><br />Religion is the knowledge applicable to another subcategory of these remaining decisions. In many cases, it quickly allows us to use the past experience of wise people to determine a course of action when faced with certain decisions. Trying to figure every one of these out for oneself would put too much of a computational burden on the brain. Religion gives quick answers, without always requiring us to think hard.<br /><br />Of course there are still a lot of decisions that can't be addressed by religious knowledge, and which might require individual thinking. 
But religion helps quite a bit; a lot of right-and-wrong type decisions can be solved quickly by referring to religious knowledge.Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-69218882493008970972008-01-29T08:35:00.000-08:002008-01-29T08:48:11.579-08:00The Nonconservation of CausalityVaguely, this is what the title means: Suppose John is a bad influence on Bob, and Bob robs Dave. Should we say that John is responsible or Bob is? I think it is possible to say that both are.<br /><br />I'm sure legal systems have thought about this sort of thing a lot...Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-7321367617055967402007-05-20T18:45:00.000-07:002007-11-18T18:48:19.970-08:00Dera Sacha Sauda versus The SikhsThe recent violence in Punjab and Haryana over the Dera Sacha Sauda chief's choice of dress highlights one of the most fundamental problems in India. This is a problem which runs deeper than something like corruption or overpopulation (not to play down the importance of those issues).<br /><br />The Sikhs (or anyone else) have no right to tell anyone how to dress. Blasphemy, in any form, is not an offense in any civilized society. Everyone should have the freedom to say and do whatever they please -- as long as it is not designed to cause disturbances. Unfortunately, the Sikhs in Punjab have failed to recognize this.<br /><br />The recent incidents are neither isolated nor unusual. Second-year students in colleges think they have the right to rag incoming freshmen. RSS and VHP activists think it is their right to smash the offices of newspapers that publish anything they disagree with. Naga Christians think they have the right to chase Hindus out of Nagaland.
National governments think it is perfectly fine to imprison and torture anyone who says anything against a minister (an outstanding example: the Emergency of Indira Gandhi). Soldiers think it is normal to torture Kashmiri kids, and kill them if they refuse to cooperate. Muslim organizations think it is their right to serve death sentences on authors who disagree with anything in the Quran. The Naxalites think they can dispense social justice to (maim and kill) anyone they don't like. Marathas think they have the right to prevent non-Marathas from working in Maharashtra. The CPI(M) thought it was within its rights to order its cadres to cut the thumbs off villagers who didn't vote for the party. Indians everywhere thought they could attack any Sikh in the aftermath of Indira Gandhi's assassination in 1984. The police everywhere think it is their right to thrash and torture everybody in jail cells.<br /><br />This lack of respect for individual civil liberties is characteristic of India. Individuals and organizations suffer from a God complex: "if it is within my power, I have the right to do it". The Dera Sacha Sauda incidents just serve to illustrate a greater malaise.<br /><br />Getting back to the Dera Sacha Sauda affair, police have registered an FIR against the head of the Dera Sacha Sauda. This may be proper procedure when complaints are made against him, but it is surprising that the police are doing nothing about the rioting hordes who mortally threatened Dera members.<br /><br />So, what are civil liberties worth?
One of the questions we Indians must ask ourselves is this: "Do we serve our collective national soul better by granting civil liberties to others who disagree with us, or by aggressively enforcing our own opinions?"<br /><br /><script src="http://www.google-analytics.com/urchin.js" type="text/javascript"><br /></script><br /><script type="text/javascript"><br />_uacct = "UA-1666123-1";<br />urchinTracker();<br /></script>Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com2tag:blogger.com,1999:blog-7280604350851867184.post-17813174410985521312007-05-04T11:38:00.000-07:002007-11-18T18:48:37.404-08:00The Natures of India and the U.S.A.In the U.S.A., there is a sense that India is on the brink of something like a world takeover and is about to catapult itself into advanced-nation-dom. Many Indians have also started believing that this will be so, without paying attention to the fundamental systemic differences between the natures of the so-called advanced countries and India. This belief is no doubt spurred by the rapid expansion witnessed since economic liberalization in 1991.<br /><br />But I think our pre-1991 economic structure accounts for only part of the backwardness. The rest is due to our ancient social structures. A long time ago, Indians invented a social structure that ensured stability and internal safety and removed much of the uncertainty associated with everyday life. This had its merits, but it also led to a society that is non-confrontational, too scared to assume leadership roles and afraid to innovate if it involves taking risks. Oh, it's easy to come up with counterexamples: in a country of 1.1 billion people, there are bound to be <span style="font-style: italic;">some</span> who do all those things. 
But the average Indian is more likely to be a sheep than the average American, and less likely to be a lion.<br /><br />Looking at this whole issue through a Dennett-ish Darwinian lens, one can see pseudo-evolutionary forces at work everywhere. Indians are probably among the most inbred people on the planet, and it shows in the number of congenital diseases and the general state of health. Our safety nets, which include nearly guaranteed intra-tribe marriage, seem to have nibbled away at our gene pool over the centuries until we remain a tired and spent population. In social terms we remain "safe", preferring life paths that lead to stability rather than achievement. Removing the bonds of what Gurcharan Das calls the License Raj is only the first step. The important question is, can we shed the bonds of our own degenerative culture?<br /><br />The answer seems to be in the affirmative, as Western influences and the powerful new media wear down cultural barriers and our own Bollywood films encourage us to rebel against ancient socio-cultural mores. Cross-cultural marriages and heterodox life patterns are increasingly taking hold. But in adopting such novelties, is India headed towards a major shark-jump? Will the India of tomorrow be so different that it is not recognizably Indian? I think the answer is yes.<br /><br />The U.S.A., in contrast to India, is founded on principles of evolutionary efficiency. America is not just a country, although it is strongly tied to its real estate. America is a meme, a concept: a country defined by the intelligence and ability of its inhabitants at any given point of time. The inhabitants themselves are less important than what they can contribute to this Amerimeme. An immigrant is only as important as the brains or labour that he or she brings into America; amazingly, this also applies to its citizens. 
The state gives citizens the opportunity to be useful -- but if they're not, they (and likely, their bloodlines) are doomed to oblivion.<br /><br />India is a little more forgiving. A less-than-important man may, and usually does, father a multitude of offspring, some of whom may end up useful. No doubt this happens in America, too -- but less frequently. America is less forgiving of inefficiency and error than India is.<br /><br />Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com1tag:blogger.com,1999:blog-7280604350851867184.post-89845458081825950632007-04-21T04:21:00.000-07:002007-11-18T18:48:57.340-08:00Causality Versus DeterminismMany attempts to define determinism, the philosophical notion that everything that happens in the universe is pre-ordained or pre-decided, involve the notion of causality. A causal chain or graph of events, driven by the laws of physics, is supposed to explain how determinism can be. In this view, the state of the universe at any time is determined once we have an initial state of the universe and a set of physical laws which allow us to compute the state of the universe at any time. Of course, since we are also part of the universe and are thus subject to its laws, some thinkers construct these arguments from the viewpoint of a hypothetical "demon" residing outside the universe and unaffected by the universe's laws.<br /><br />In what follows, I will argue that the most common notion of causality, based on counterfactual outcomes, is meaningless in a deterministic universe.
We may have to adopt a definition of causality which relies on computability <span style="font-style: italic;">within</span> the universe: A causes B if we can start with state A and compute a sequence of state changes induced by the laws of the universe, ending in B.<br /><br /><span style="font-weight: bold;font-size:130%;" >Counterfactual Causality Fails in a Deterministic Universe<br /><span style="font-size:100%;"><br /></span></span>According to the Wikipedia entry on determinism:<br /><blockquote><i>Causal</i> (or <i>nomological</i>) determinism is the thesis that future events are necessitated by past and present events combined with the laws of nature.</blockquote>The Wikipedia entry on Causality has this to say:<br /><blockquote>The philosopher David Lewis notably suggested that all statements about causality can be understood as counterfactual statements. So, for instance, the statement that John's smoking caused his premature death is equivalent to saying that had John not smoked he would not have prematurely died.</blockquote>The incompatibility between determinism and causality is now easy to see: if causality is defined counterfactually, then any event A which occurs before an event B is causally responsible for B. This is because the statement "If A had not occurred, then B would not have occurred" is vacuous in a deterministic universe. "If A had not occurred" is like saying "If 1 equals 2", because determinism says that A occurring is the only possibility -- and a conditional with an impossible antecedent is vacuously true. Thus, if A occurs before B, then A counts as causally responsible for B, and the counterfactual criterion fails to discriminate between causes and mere predecessors.
Then we can say that A causes B. Note that this definition rests on the ability to compute, or to understand, the connection between the two states.<br /><br />The definition is not yet adequate, however. What if, given <span style="font-style: italic;">any</span> two events A and B, we can compute such a sequence of intermediate events? Then this definition would be no more useful than the previous one based on counterfactuals. We may have to abandon an attempt to define causality as either true or false (A causes B or A does not cause B) and accept a definition based on degrees of causality. Thus, if the chain of intermediate events going from A to B is long, we say the relationship is "less causal", and if it is short, we say it is "more causal".<br /><br />Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-52214733524088769762007-04-15T05:23:00.000-07:002007-11-18T18:49:12.670-08:00A Mechanistic View of EthicsWhat is Ethics? What is the foundation for ethics? Do we need religion for ethics? Can a mechanical (soulless, purely physics-driven) being have ethics? How can ethics be derived in a deterministic universe without free will? The Optimization viewpoint. Can there be an Ultimate Logical Justification for any system of ethics?<br /><br /><span style="font-weight: bold;font-size:130%;" >Ethics Without Soul</span><br /><br />There has recently been a lot of controversy about Atheist ethics. Ethical systems have, traditionally, been tied to religion. Since religions became widespread, the primary motivation for ethical behaviour has been religious. Each religion has its own ethical system. Almost all religions specify carrot-and-stick reasons for behaving ethically.
In the Abrahamic religions, heaven and hell are the carrot and the stick. In Hinduism, nirvana and demotion in the "highness" of being are the carrot and the stick. Not all religions insist on the existence of one universal "God", but Atheists typically lack belief in a "God" and remain unattached to any of the usual religions as well. The question then arises: Can Atheists behave ethically?<br /><br />More generally, the question can be posed for any <b>mechanistic system</b> (a system ruled only by the laws of physics and not by any agent, such as a "soul", connected to religion). Mechanistic systems include humans, other organisms, robots, and any other objects or phenomena. (Whether humans are mechanistic is a subject of much debate; see <a href="http://armchairruminations.blogspot.com/2006/12/strong-and-weak-artificial-intelligence.html">Strong and Weak Artificial Intelligence</a> and <a href="http://armchairruminations.blogspot.com/2007/01/gdel-penrose-and-artificial.html">Gödel, Penrose and Artificial Intelligence -- Simplified</a>.) What does ethics mean for a mechanistic system?<br /><br /><span style="font-weight: bold;font-size:130%;" >The Goal of Ethics</span><br /><br />I think ethics can be viewed as a mechanism for <span style="font-weight: bold;">preservation or proliferation of complexity</span>. Complexity is precious; the entropy grindstone is constantly trying to destroy it (the second law of thermodynamics). Every ethical principle we have can be seen as ultimately serving complexity. Here are some examples.<br /><br />We prize human life over that of all other animals. This is consistent with complexity preservation: humans are more complex than other animals. We think killing an animal for no reason is unethical; we feel no such thing about smashing a rock.
This is also consistent with complexity preservation: an animal is more complex than a rock.<br /><br />A lot of things are not directly connected to complexity preservation, but come about because we need simple rules of thumb that we can follow easily. Lying is considered unethical. In the long term, the rule against lying helps preserve social order and thus helps preserve the human species.<br /><br />Thus mechanistic systems can have ethical behaviour -- behaviour which eventually tends to preserve or increase complexity. Atheists can be as ethical as anyone else, as can a robot, as long as their actions are directed towards optimizing complexity.<br /><br />Thus we have converted the problem of constructing ethical systems to an optimization problem. The objective function (which we are trying to maximize) is overall complexity. Ethics can now be viewed as rules of behaviour which, when followed, tend to increase complexity.<br /><br /><span style="font-weight: bold;font-size:130%;" >Our Ethical Principles</span><br /><br />So this tells us what ethics is about, and what ethics aims to do. But it still doesn't tell us how a mechanistic individual should develop his/her sense of ethics. A person can hardly be expected to think of some far-off big-picture complexity goal when deciding what constitutes good ethics. How can the above definition be made practical?<br /><br />First, by recognizing what the eventual goal of ethics is, we have converted the construction of ethical principles into an optimization problem. This is a good first step, since we now know what it is we are trying to do when we talk about acting ethically.<br /><br />Our solution to the optimization problem does not always rely on the objective function of complexity, but rather relies on the observation that various human institutions (societies, religions, legal systems) have already come up with rules of thumb for this optimization.
Once we recognize this, we use our judgment to decide which of the existing rules are relevant to overall preservation of complexity and adopt an ethical system based on these rules. This solution may not be perfect, but it is more important that the ethical rules be easy to remember and follow -- what use is a perfect but unintelligible and impractical rule? It is preferable, I think, to find simple and general rules, and avoid special cases and exceptions as much as possible.<br /><br />What's more, once we recognize this as a valid scheme for the generation of ethical principles, we can free ourselves from the past. Faced with a new situation, we can find ethical rules tailored to the new situation, rather than searching for applicable rules buried in existing religious systems. A religious system may be able to help, but the effort of trying to reconcile religion with the new situation is often not worth it.<br /><br />Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-73003569452346648992007-04-14T20:27:00.000-07:002007-11-18T18:49:27.000-08:00Is Our Physics the only Physics?We have a specific way of perceiving things. For example, our mind perceives the world through a four-dimensional model: 3 spatial dimensions, and one (unidirectional) time dimension. But is this the only way the world around us can be perceived?<br /><br />It is clear that, as long as there is a one-to-one mapping between one representation and another, any two representations of any piece of information are equivalent.
For example, it does not matter whether we store a position in polar or Cartesian coordinates - because we have a one-to-one map from one to the other.<br /><br />So, imagine that we meet an alien species. Would they necessarily have a unit of distance? Could it be that, instead of (x, y, z, t), they perceive (tx, ty, tz, t^3)? Their unit of measurement would then have distance and time entangled together. They might say, "walk for 125 cube-seconds" (equivalent to us saying "walk for 5 seconds"). Our statement "the car is 10 kilometres away and the time now is 125 seconds" would translate to "the car is 50 km-seconds away". Is there a logical reason why every species should perceive in the same units that we do? Maybe not!<br /><br />This needn't be restricted just to distance and time. A species might perceive taste and colour together, or even distance confounded with emotional state. "That's red-sweet, my friend, but it's happy-far!"<br /><br />Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-64148501163553182172007-04-12T16:07:00.000-07:002007-11-18T18:49:49.346-08:00The Next Big ThingWhat's the Next Big Thing in technology going to be? By big, I mean something revolutionary - like the World Wide Web, or at least (a little lower on the rungs) like social networking.<br /><br />I think it's going to be integration of electronic devices into the human body.
We already have scientists working on:<br /><br />Pacemakers (already done!)<br /><a href="http://sciencentral.com/articles/view.php3?type=article&article_id=218392904">A chip that improves eyesight.</a><br />Artificially enhanced minds: <a href="http://www.popsci.com/popsci/printerfriendly/science/0e54d952c97b1110vgnvcm1000004eecbccdrcrd.html">here</a> and <a href="http://www.newscientist.com/article/dn8902-chip-ramps-up-neurontocomputer-communication.html">here</a>.<br />Thought-controlled computers: <a href="http://www.sciencedaily.com/releases/2002/06/020618073233.htm">here</a> and <a href="http://www.newscientist.com/article.ns?id=dn8826">here.</a><br /><br />I think this trend will continue, and within a few decades human-computer hybrids will be widespread.<br /><br />What then? The scary thing about this is that it will mean the rich are suddenly fundamentally superior. The biological randomness that levels the playing field somewhat, say by making a poor person smart or strong, will be lost. Those who are born with the most money can be the smartest <span style="font-style: italic;">and</span> the strongest. Which means they'll make even more money. Which will let them buy even more hardware to become even stronger and smarter. And so on.<br /><br />Will those who are poor at the start of this race be doomed?<br /><br />Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-82560834734512097502007-01-24T06:22:00.000-08:002007-11-18T18:50:04.176-08:00Gödel, Penrose and Artificial Intelligence -- SimplifiedThe mathematician Kurt Gödel became famous for his incompleteness theorems, published in 1931.
Gödel proved a theorem which implied that, given any consistent formal system of axioms powerful enough to express basic arithmetic, there exists a true statement which cannot be proved using those axioms. This statement is called a Gödel sentence for that formal system.<br /><br />Surprisingly, some people -- notably, the mathematician and physicist Roger Penrose (in his books, <span style="font-style: italic;">The Emperor's New Mind</span> and <span style="font-style: italic;">Shadows of the Mind</span>) -- have used this to claim that human intelligence cannot be an algorithm.<br /><br /><span style="font-weight: bold;">Proof:</span> How does this claim work? Loosely speaking, algorithms correspond to formal systems. Suppose, then, that we have a mathematician Mr. A, who has studied Gödel's theorems. Assume that Mr. A, a human, is an algorithm -- and hence a formal system. Find a Gödel sentence for the formal system consisting of Mr. A. Now, Mr. A has studied Gödel's theorems, so he knows that a Gödel sentence is true and can prove it, since this proof is precisely what he studied. Thus the sentence is provable using Mr. A's formal system. But this is a contradiction since a Gödel sentence, by definition, cannot be proved using the axioms of Mr. A's formal system. Thus, Mr. A can do something -- prove a Gödel sentence -- which no formal system should be able to do. Thus the statement that Mr. A is a formal system is false.<br /><br /><span style="font-weight: bold;">Objection 1:</span> Proponents of AI have criticized this argument. The most common criticism is the following: look at Gödel's theorem above again. It holds only for <span style="font-style: italic;">consistent</span> formal systems. Thus, Mr. A would have to be a consistent formal system for Penrose's argument to make sense. But who says humans are consistent? Even mathematicians like Mr. A may contradict themselves sometimes, and so are not necessarily consistent.
You would first have to prove that humans are consistent for this argument to work. (Indeed, Penrose does spend considerable effort trying to prove just this, but it has not convinced his detractors.)<br /><br /><span style="font-weight: bold;">Objection 2:</span> I think that there is a stronger reason why Penrose's argument fails. The problem is in the sentence "Mr. A has studied Gödel's theorems, so he knows that a Gödel sentence is true and can prove it", taken from my simplified version of Penrose's proof above. This statement has two interpretations. It is true -- only when interpreted correctly. Its two interpretations are so similar that we naturally confuse them. Here are the interpretations:<br /><ol><li>If Mr. A is given a sentence and is told that it is a Gödel sentence, then he knows that it is true</li><li>If Mr. A is given <span style="font-style: italic;">any</span> sentence, he can recognize whether it is a Gödel sentence, and if it is he knows that it is true.</li></ol>The first interpretation is true. The second is not necessarily true -- Mr. A may have no way of identifying that a particular sentence is a Gödel sentence for his own formal system. Thus, contrary to Penrose's claim, it is quite possible that Mr. A is handed a Gödel sentence for his formal system, and has no way to prove or disprove it -- even if it is true. This is because he does not know whether the sentence is a Gödel sentence for his system.<br /><br />I believe Objection 2 provides a much stronger refutation of Penrose's argument than Objection 1. 
See <a href="http://armchairruminations.blogspot.com/2006/12/gdels-theorem-and-artificial.html">here</a> for a more detailed discussion of this issue.<br /><br /><script src="http://www.google-analytics.com/urchin.js" type="text/javascript"><br /></script><br /><script type="text/javascript"><br />_uacct = "UA-1666123-1";<br />urchinTracker();<br /></script>Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-10926923284285737672006-12-23T13:04:00.001-08:002007-11-18T18:50:19.615-08:00Strong and Weak Artificial IntelligenceArtificial Intelligence (AI) is almost an umbrella term today. Different people use it to refer to different things, and all of the uses taken together cover a lot of ground. Image processing, pattern recognition, various types of automated statistical analysis and syntactic reasoning have all been called artificial intelligence.<br /><br />The AI of this article refers to the ideal of creating a computer with human-like behaviour or consciousness. That last sentence is already a loaded one. To some people, creating human-like behaviour is the same as creating human-like consciousness. Others argue that behaviour and consciousness are fundamentally different. The two are called, respectively, Weak AI and Strong AI.<br /><br />Weak AI refers to the ideal of creating, via artificial means (artificial meaning demonstrably algorithmic, for instance via a program on a computer), a set of behaviours which are indistinguishable from human behaviour.<br /><br />Strong AI refers to the notion that the human mind is in fact algorithmic. Not only can it be simulated using an algorithm, it <span style="font-style: italic;">is</span> an algorithm.<br /><br />Distinguishing between Weak AI and Strong AI can be hard. 
Although they appear to be different, the argument goes that if something behaves exactly like a human, then it <span style="font-style: italic;">is</span> human, at least mentally. This may seem counterintuitive at first, but the crucial condition is that it behave like a human in all aspects. If this argument is accepted, then simulating something that behaves like a human is the same thing as creating a human. To understand this point of view, it helps to try to identify the difference between a "true" human and a "simulated" human from the mental perspective. Is there any aspect of the mentality of humans that cannot be simulated, which does not manifest in any form of behaviour? That is, is there anything about the mentality of humans that is not "simulatable", even if every part of the behaviour is simulatable?<br /><br />Opponents of the equality of Strong AI and Weak AI use arguments based essentially on the philosophical notion of <span style="font-style: italic;">qualia</span>, or unique individual perceptions. For example, an organism experiencing pain has a unique, unified experience of the sensation. Opponents of the equality essentially claim that such an experience or <span style="font-style: italic;">quale</span> cannot be simulated on a computer, even if the behaviour associated with the experience can.<br /><br /><script src="http://www.google-analytics.com/urchin.js" type="text/javascript"><br /></script><br /><script type="text/javascript"><br />_uacct = "UA-1666123-1";<br />urchinTracker();<br /></script>Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com11tag:blogger.com,1999:blog-7280604350851867184.post-54729323571352690662006-12-22T15:01:00.000-08:002007-11-18T18:50:39.398-08:00Gödel's Theorem and Artificial IntelligenceIn 1931, the mathematician Kurt Gödel published a paper that proved his famous incompleteness theorems. 
The first of these theorems has the following as one of its consequences: in any consistent axiomatic formal system rich enough to include arithmetic, there are true statements that cannot be proven as theorems. That is, there are statements which are true, but whose truth cannot be determined by any algorithm starting only from the axioms of the system.<br /><br />Gödel's theorem has had a lot of consequences, though it has not affected the development of most fields of mathematics or science. The theorem has been used by various people to claim various things, including that the Bible is incomplete, that God doesn't exist, and that God must exist. This result has also been used by some people (most notably the mathematical physicist Roger Penrose) to claim that both <a href="http://armchairruminations.blogspot.com/2006/12/strong-and-weak-artificial-intelligence.html">Strong AI and Weak AI</a> are impossible. I find this claim very interesting, and the arguments that are used are not easy to refute.<br /><br />I think the arguments are so hard to refute because the natural languages we converse in (English, in this case) are sometimes not nuanced enough to clearly express our thoughts. We use the same phrase to mean two different things and confuse ourselves. The following attempts to address this by highlighting a potential flaw in Penrose's argument. The flaw arises because Penrose fails to distinguish <span style="font-style: italic;">the knowledge that there exists an object having a certain property</span> from <span style="font-style: italic;">the knowledge of an object having that property</span>. That is, he fails to recognize that we may believe that an object with the property exists, without knowing what the object is.<br /><br />Gödel's theorems are notoriously hard to grasp. Part of the reason is the unfamiliarity of the setting in which the theorems are developed (first order languages). 
Luckily, there is a far more familiar setting in which Gödel's theorems come to play under a different guise: that of algorithms and programs, which many people are familiar with today. So let us understand an analogue of Gödel's first theorem in the setting of algorithms, and use this setting to make our arguments, instead of the mathematical logic setting Gödel originally used.<br /><br /><span style="font-weight: bold;">Algorithms and the Halting Problem</span><br /><br />Rather than try to define exactly what an algorithm is, I assume that it is well-defined, and note that any algorithm can be encoded as a finite string. An algorithm can have an input and an output; if A is an algorithm with 3 inputs i1, i2, i3, then we use the notation o = A(i1, i2, i3) to indicate that o is the output of the algorithm when executed with the inputs i1, i2, i3. An algorithm can have any number of inputs, and the number of inputs itself needn't be fixed. That is, A could have 3, 4, 66, 1,003,478, or any other number of inputs.<br /><br />We all know that algorithms execute in a series of "steps", and any program executes a finite number of steps before it halts. We also know that programs sometimes don't halt at all -- an operating system hang, for example, a loop such as "<span style="font-family:courier new;">while(1){}</span>" in C, or a program written to halt when it finds an odd number divisible by 2. The <span style="font-weight: bold;">Halting Problem</span> refers to the following question. Is there any algorithm A, which, given another algorithm B as input, halts and returns 1 if B halts, and halts and returns 0 if B doesn't halt? That is, A(B) = 0 if B doesn't halt, A(B) = 1 if B does halt. Important points to note:<br /><ol><li>A(B) must halt -- after a finite number of steps, it must stop.</li><li>The answer it returns (0 or 1) must be correct.</li><li>A() must be able to do 1. and 2. 
above for <span style="font-style: italic;">any</span> program B.</li></ol>The answer to this question is known; there cannot exist any such algorithm A.<br /><br /><span style="font-weight: bold;">HL1</span>: For any algorithm A, there is an algorithm, call it G(A), such that either A doesn't halt on input G(A), or A(G(A)) is wrong (0 if G(A) halts, 1 if G(A) doesn't halt).<br /><br />We write G(A) instead of just G to emphasize that G(A) depends on A.<br /><br />Now suppose we insist on considering only algorithms A which either halt with a correct answer (they halt only when they have determined beyond doubt whether their input halts or not), or don't halt at all. These algorithms never return a wrong answer. Let us call these <span style="font-weight: bold;">sound</span><span style="font-weight: bold;"> algorithms</span>. Note that a sound algorithm never answers incorrectly, no matter what its input. It can be shown that:<br /><br /><span style="font-weight: bold;">HL2:</span> For every sound algorithm A, there exists an algorithm G(A) such that neither G(A) nor A(G(A)) halts.<br /><br />This result is the analogue of Gödel's first theorem in the setting of algorithms. 
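The diagonal construction that drives HL1 can be sketched in a few lines of Python. This is only a toy illustration: <span style="font-family:courier new;">make_G</span> builds, from any claimed <span style="font-style: italic;">total</span> halting decider A (one that always returns 0 or 1), a program G(A) that does the opposite of whatever A predicts about it, so A's verdict on G(A) is necessarily wrong. The "deciders" <span style="font-family:courier new;">optimist</span> and <span style="font-family:courier new;">pessimist</span> are stand-ins invented for this example, not real halting checkers.<br /><br />

```python
# Toy diagonal construction behind HL1: G(A) consults A's prediction about
# itself and then does the opposite.
def make_G(A):
    def G():
        if A(G) == 1:      # A claims "G halts" ...
            while True:    # ... so G loops forever, refuting A
                pass
        # A claims "G doesn't halt", so G halts immediately, refuting A
    return G

def optimist(B):           # a "decider" that claims every program halts
    return 1

def pessimist(B):          # a "decider" that claims no program halts
    return 0

G_p = make_G(pessimist)
G_p()                      # halts at once, even though pessimist(G_p) == 0
# make_G(optimist) would loop forever if run, even though optimist says it halts
```

A sound algorithm escapes this trap only by refusing to answer: per HL2, on its own G(A) it can neither answer wrongly nor answer at all.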
Note the following:<br /><ol><li>A is a sound algorithm.</li><li>G(A) doesn't halt; this is a fact.</li><li>However, A cannot determine the above fact; A(G(A)) does not halt.<br /></li></ol>We are now equipped to state the usual objection to AI based on this analogue of Gödel's theorem.<br /><br /><span style="font-weight: bold;">The Halting Problem and AI</span><br /><br />The usual objection to the possibility of Strong and Weak AI based on the halting problem argument goes as follows.<br /><ol><li>Suppose I am a <span style="font-style: italic;">sound</span> algorithm, say A.<br /></li><li>There is an algorithm G(A) which does not halt such that A(G(A)) does not halt.<br /></li><li>However, I know with certainty that G(A) does not halt.</li><li>Therefore, since I am A, A knows with certainty that G(A) does not halt, and will perform the following:</li><ol><li>If the input B is not G(A), A can determine whether B halts in a finite number of steps; A does so and returns the correct answer.</li><li>If the input is G(A), simply return 0 (for "does not halt"). This is correct.</li></ol><li>Thus:</li><ol><li>A is sound; we have not violated this assumption.<br /></li><li>A(G(A)) does halt.<br /></li></ol><li>So A(G(A)) does not halt (from 2. above) and A(G(A)) does halt (from 5.2. above), a contradiction.</li><li>Therefore 1. must be an invalid assumption.<br /></li></ol>Since the only assumption in the above is that I am a sound algorithm, and we have a contradiction, the assumption must be false; that is, I cannot be a sound algorithm.<br /><br />This "proof" is slightly blurry in steps 4.1 and 4.2, in the sense that G(A) may not be unique. There will be many algorithms satisfying the requirements for G(A). 
However, this is a non-issue: we could have considered G(A) to be the class of all algorithms satisfying those requirements and used set membership instead of equality, and this issue would have disappeared.<br /><br /><span style="font-weight: bold;">Disproof</span><br /><br />This "proof" conceals several points, which we deal with sequentially.<br /><br /><span style="font-weight: bold;">Objection 1.</span> The first objection is to Step 1 in the proof; this is the most common objection to the proof. Why should we suppose that we are sound algorithms? We may be unsound algorithms. Recall that in the first statement of the halting lemma (HL1), we don't know whether G(A) halts or not. So if we are an unsound algorithm A, we don't know whether our G(A) halts or not, and the proof doesn't go through at all. Penrose does try to defend the assumption that we are sound algorithms.<br /><br /><span style="font-weight: bold;">Objection 2.</span> To move on to the next objection, suppose we <span style="font-style: italic;">grant</span> that if we are algorithms, we would have to be sound algorithms. Is the proof correct in this case? This second objection has to do with what A really is. In steps 4.1 and 4.2 of the proof, we would appear to be modifying the algorithm A itself. Since we assumed "I am A", such a modification may not be permissible. In any case, if A is modified to A', then we need to be concerned with G(A'), not with G(A) any longer. However, this objection is weakened by the chance that steps 4.1 and 4.2 could already be part of A, without any modification. That is, the behaviour of A is to check whether its input is G(A) or not, and base its actions on that test.<br /><br /><span style="font-weight: bold;">Objection 3.</span> Next, we grant that A already includes 4.1 and 4.2 and as a consequence these steps do not constitute a modification to the algorithm. This brings us to what I think is the most crucial flaw in the proof. 
The proof involves the test: "if the input B is G(A)". A crucial question is, would A be able to recognize G(A)? A knows that G(A) exists, but does not know <span style="font-style: italic;">what G(A) is</span>. The knowledge that G(A) exists does not enable A to test whether B = G(A). Note also that, as noted below the proof, G(A) is not unique. Thus even if A could find one G(A) and compare B against it, this would not suffice. A would have to compare B against all possible G(A)s -- and this could be an infinite set. Thus A would have to determine, using some clever method (not simply by comparing against candidates for G(A)), whether B is a valid candidate for G(A). If we <span style="font-style: italic;">assume</span> that A can do this in a finite number of steps, we now have two assumptions in the proof, and the contradiction at the end only implies that at least one of them is wrong -- without telling us which.<br /><br /><span style="font-weight: bold;">Conclusion<br /><span style="font-weight: bold;"><span style="font-weight: bold;"><br /></span></span></span>Several people have published versions of this proof, but if the proof is essentially the one above, the fallacy outlined in the third objection would seem to settle the issue. As pointed out at the beginning of this article, it is the free form of the English language that makes it so hard to pin down what is really going on. When we say we know that G(A) doesn't halt, we imagine we also know <span style="font-style: italic;">what G(A) is</span> -- and this is not the case here. This distinction, between knowing that "there is a G(A) such that neither G(A) nor A(G(A)) halts" and knowing, of a specific algorithm in hand, that it is such a G(A), is the crucial one. 
Weak and Strong AI are still very much alive!<span style="font-weight: bold;"><span style="font-weight: bold;"><br /></span></span><br /><br /><script src="http://www.google-analytics.com/urchin.js" type="text/javascript"><br /></script><br /><script type="text/javascript"><br />_uacct = "UA-1666123-1";<br />urchinTracker();<br /></script>Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-27844997886901607302006-12-19T20:37:00.000-08:002007-11-18T18:50:57.820-08:00The Chinese Room Experiment - Part IThe Chinese Room Experiment is a thought experiment, put forward by the philosopher John R. Searle, that builds on the Turing Test.<br /><br />The intention of the thought experiment is to demonstrate that the hypothesis of Strong Artificial Intelligence (Strong AI), which claims that the human mind is an algorithm, is wrong. The Strong AI argument says that every human mental process <span style="font-style: italic;">is</span> algorithmic, that is, it follows a predefined sequence of steps. This appears to be different from the Weak AI hypothesis, which claims that every human mental process can be <span style="font-style: italic;">simulated</span> on a computer, without claiming that the human mind itself is an algorithm. Some argue that mental processes are essentially particular behaviours (behaviourism) or a way of looking at physical processes (functionalism), and as a consequence there is no significant distinction between the Strong AI and Weak AI hypotheses. (Read about <a href="http://armchairruminations.blogspot.com/2006/12/strong-and-weak-artificial-intelligence.html">Strong and Weak AI</a>.)<br /><br />The Chinese Room Argument asks one to imagine a native English speaker, Steve, sitting in a closed room with two windows. Steve knows nothing of Chinese. In the room is a book containing detailed instructions on how to respond to any sentence in Chinese. 
Outside the room is Wong, a native Chinese speaker. Steve receives a sentence in Chinese from Wong via the input window, consults the book, and responds in Chinese at the output window. Steve carries on such a conversation with Wong without understanding either the input, the output, or the logic behind the exchange.<br /><br />Searle claims that to Wong, Steve would appear to "know", or "understand", Chinese. But Steve doesn't. He is simply following an algorithm; he has no clue what any of the Chinese exchange means. Thus, according to Searle, no algorithm constitutes <span style="font-style: italic;">understanding</span> or <span style="font-style: italic;">consciousness</span>.<br /><br /><span style="font-weight: bold;">Objections to the Chinese Room Argument<br /><span style="font-weight: bold;"><span style="font-weight: bold;"><span style="font-weight: bold;"><br /></span></span></span></span>There are several answers to Searle's Chinese Room Experiment from people claiming that it does <span style="font-style: italic;">not</span> prove the impossibility of Strong AI. Here are some of them:<span style="font-weight: bold;"><span style="font-weight: bold;"><span style="font-weight: bold;"><span style="font-weight: bold;"><br /></span></span></span></span><ol><li><span style="font-weight: bold;">The Systems Reply<br /></span></li><ol><li><span style="font-weight: bold;">Objection</span> This objection says that, while Steve does not understand Chinese, the system consisting of Steve and the book does. This can be viewed as a single, larger entity which does understand Chinese. <span style="font-weight: bold;"></span></li><li><span style="font-weight: bold;">Searle's Reply</span> Searle replies to this objection by suggesting a modification of the experiment in which Steve memorizes the entire translation book and steps out of the room, talking face to face with Wong. Steve still does not understand Chinese, says Searle. 
He is still applying the rules without any understanding of Chinese.</li><li><span style="font-weight: bold;">Rejoinder</span> The problem with Searle's reply is that it invites us to think of all of Steve's memory as an integral part of his consciousness or ego, while in this modification to the experiment, Steve is using his memory as if it were just separate storage, no different than copying the book onto his forearm. Steve and his memory, taken together as a system, do understand Chinese.<br /></li></ol><li><span style="font-weight: bold;">The Complexity Reply</span></li><ol><li><span style="font-weight: bold;">Objection </span>This objection, due to Daniel Dennett, says that nothing can duplicate consciousness without being extremely complex. Ignoring this complexity in the Chinese Room Experiment fools our intuition into thinking the Steve+Book combination is ignorant of Chinese, since we think the book is "just a book". In fact, if we considered the complexity of the algorithm required to converse in Chinese, we would be forced to conclude that the "book" is actually complex enough to be considered conscious. (The notion of a book being conscious may seem ridiculous, but this really refers to the algorithm contained in the book, not the physical book itself.)</li><li><span style="font-weight: bold;">Searle's Reply</span> Searle interprets Dennett's objection as the statement "You can't have a book like that", and goes on to say that the whole point of thought experiments is to imagine a situation that is conceivable, even if we don't know the details of how to set it up. He says Dennett is essentially denying the idea behind thought experiments.<br /></li><li><span style="font-weight: bold;">Rejoinder</span> Dennett is not saying it is impossible to have a book that complex; he is saying anything that complex is already conscious and aware. 
If Searle insists on having a thought experiment where a book is complex enough to converse in Chinese but is not conscious, he is assuming too much and is begging the question.</li></ol></ol>More information on this interesting topic can be found in the following books:<br /><br />Consciousness Explained - by Daniel Dennett<br />The Mystery of Consciousness - by John R. Searle<br /><br /><script src="http://www.google-analytics.com/urchin.js" type="text/javascript"><br /></script><br /><script type="text/javascript"><br />_uacct = "UA-1666123-1";<br />urchinTracker();<br /></script>Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com0tag:blogger.com,1999:blog-7280604350851867184.post-35336521319815126912006-12-14T17:09:00.000-08:002007-11-18T18:51:13.267-08:00Reservations - The Right Wayby "Armchair Guy"<br /><br />Reservations are back in the news, and have been for a while. The Congress government has proved resolute and determined to implement reservations in sweeping steps. There are multiple consequences, including nationwide protests, accusations of a sacrifice of merit, concerns about the impact on the economy should reservations be approved for the private sector, increased polarization and mutual distrust among various socioeconomic classes.<br /><br />The reason given for reservations is that in the current socioeconomic milieu, different categories of people face different challenges in obtaining education and employment. The social and economic obstacles are hypothesized to be so large that, even if education assistance is substituted for reservations, the impact would not be sufficient to ensure sustainable overall equality.<br /><br /><span style="font-weight: bold;">Multiple Index Related Affirmative Action (MIRAA)</span><br /><br />One question that arises often is, why only caste? 
It would appear that the optimal way to ensure equality would be to use a basket of indicators including caste, gender, economic status, etc. One such basket, named <a href="http://www.sabrang.com/cc/archive/2006/june06/report3.html">Multiple Index Related Affirmative Action (MIRAA)</a>, has been suggested by Prof. Purushottam Agrawal. The argument for using caste alone is that caste is the biggest indicator of underdevelopment. Indices such as MIRAA could certainly be more effective in improving the condition of people than caste alone, since they would allow reservations to be targeted at the people who need them the most.<br /><br />To understand MIRAA, we first consider the basket of socioeconomic indicators it suggests. The primary considerations when choosing such indicators should be as follows:<br /><ol><li>The indicators should be indicative of education and employment levels</li><li>Information on the indicators should be readily available<br /></li></ol>The indicators considered by Prof. Agrawal under MIRAA are the following:<br /><ol><li>Caste/Tribe</li><li>Gender</li><li>Economic status</li><li>Kind of schooling received</li><li>Region where candidate spent formative years</li><li>Educational status of parents/family<br /></li></ol>Each candidate is awarded 0 to 5 points based on his/her status on each of these indicators, for a maximum of 30 points. This then forms 30% of the score used by any institution to determine admissions.<br /><br /><span style="font-weight: bold;">Debating MIRAA</span><br /><br />This system is already being debated on <a href="http://www.tehelka.com/story_main18.asp?filename=Fe060306Quota_debate.asp">Tehelka</a>. Praise for the system includes the fact that it is proven: Jawaharlal Nehru University has used it successfully in the past. The system is a self-organizing score in the sense that it targets the right sections of society in a manageable way. 
The need to target the right people is mentioned by several readers on the Tehelka debate. The system also appears to balance the needs of the group and the rights of the individual. Put another way, 70% of the final admission score is "merit" based.<br /><br />Criticisms include one from Amit Sen Gupta, <a href="http://www.tehelka.com/story_main18.asp?filename=Ne052706Shining_India.asp">another commentator on Tehelka</a>, who believes that:<blockquote><span style="font-family:Arial,Helvetica,sans-serif;font-size:85%;">Targeting of affirmative action to a section within “backward” castes will be used as a powerful tool to deny the benefits to as many as possible</span></blockquote>and that the system would be a non-starter on a nation-wide scale. Some readers on Tehelka also expressed concerns about the exact weights given to various indicators.<br /><br />The points raised by Mr. Gupta bear thinking about. One strength of MIRAA is that it is a single transparently computable score. This is good for scalability. MIRAA also does not explicitly target a section within backward castes. It targets those who are suffering the most; as an implicit consequence, it will target backward castes. Within backward castes, it would target specific sections, but this is still implicit. The system as a whole remains simple, based on a single score, and thus not prone to overly high levels of manipulation.<br /><br />The unstated but most contentious issue is likely the low overall weight given to caste/tribe -- just 5 points out of 30, or in the bigger picture, just 5% of the total candidate score. Resolving this bone of contention is crucial; most of the difficulties with MIRAA are likely to be about the relative weighting of the indices and about the total percentage of the MIRAA score included in the total candidate score.<br /><br />Objections by other readers serve to strengthen this assertion. 
The exact weights used to compute the score were selected based on the individual reasoning, personal experience, or personal preferences of one or a few individuals. The 30% figure was arrived at the same way.<br /><br />In a word, MIRAA as it stands today is a <span style="font-style: italic;">subjective</span> system.<br /><br /><span style="font-weight: bold;">Making MIRAA Objective: the Modified MIRAA Score</span><br /><br />Turning MIRAA into an objective system requires only a little tweaking of the system itself. It would, in addition, involve some survey sampling and statistical analysis.<br /><br />To understand how to make MIRAA an objective system, consider what we mean when we say that a person belonging to a certain category, say an SC candidate, is at a disadvantage compared to a forward caste (FC) candidate.<br /><br />An objective way of defining the amount of disadvantage is the following. In an examination, suppose the average SC candidate scores 12 percentage points lower than the average FC candidate. Then the SC candidate is at a disadvantage of 12 percentage points compared to the FC candidate, and should get a MIRAA score of 12. This is the correct score because it neutralizes the real disadvantage the average SC candidate has relative to the FC candidate. It is objective because the score is completely data driven; personal opinions don't come into the picture. The data and methods used to establish the actual disadvantage would be a matter of public record.<br /><br />The MIRAA set of indices can be used to refine the above further. For example, SC women may, on average, score 18 percentage points lower than FC men, while the difference for SC men may be 10 points. SC women should then get a MIRAA score of 18, while SC men should get a MIRAA score of 10.<br /><br />The same system can be extended to include all 6 variables. 
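As a toy sketch, the computation just described could look as follows in Python. All stratum names and averages below are illustrative, chosen only to match the 10-point and 18-point figures above; real scores would come from survey data.<br /><br />

```python
# Toy sketch of the modified MIRAA score: each stratum's score is its average
# shortfall, in percentage points, relative to a reference stratum (FC men).
# Stratum names and averages are made up for illustration, not real data.
def modified_miraa(avg_exam_scores, reference="FC men"):
    ref = avg_exam_scores[reference]
    return {stratum: ref - avg for stratum, avg in avg_exam_scores.items()}

averages = {"FC men": 70.0, "SC men": 60.0, "SC women": 52.0}
scores = modified_miraa(averages)
# scores: {"FC men": 0.0, "SC men": 10.0, "SC women": 18.0}

# A candidate's final score is the entrance exam score plus the stratum score:
def final_score(exam_score, stratum):
    return exam_score + scores[stratum]
```

Because the score is computed directly from observed averages, changing the data changes the scores; no weights need to be chosen by hand.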
In this modified MIRAA system, there is no artificial percentage attached to group needs, such as the 30% in the original MIRAA. The candidate's modified MIRAA score is simply added to his/her entrance exam score to determine the final score. This final score may add up to more than 100, but that is not a problem if rank (based on the final score) is used to determine admission.<br /><br />The modified MIRAA system balances merit and group needs more correctly by restoring to each candidate exactly the amount of merit that he/she was deprived of by the socioeconomic system.<br /><br /><span style="font-weight: bold;">Potential Drawbacks of the Modified MIRAA Plan</span><br /><br />This method appears to have some drawbacks as well.<br /><br />First, it appears to reward poor performance. The strata that perform the worst would have the highest MIRAA scores, so it could be argued that this system may actually encourage poor performance. This objection does not hold up in reality, however: within each stratum, candidates are selected by fair competition according to merit. Thus there is a strong incentive for each candidate to perform better, and those who perform worse within each stratum would fail to obtain seats.<br /><br />The second objection is that there is no single standardized exam in India (analogous to the SAT in the USA) on which the difference in score between different strata could be evaluated. This objection can be resolved by using statistical methods (such as "grading on a curve") to normalize scholastic achievements across different educational boards. 
Alternatively, if it is felt that there are fundamentally different categories of examinations and the score should be different in each, several categories of exams could be created, with a different table of modified MIRAA scores for each.<br /><br /><span style="font-weight: bold;">Discussion and Conclusions<br /><span style="font-weight: bold;"><br /></span></span>The existing reservation system does not necessarily get resources to those who need them the most; however, some sort of assistance must be provided to those who have historically suffered from socioeconomic discrimination. Prof. Agrawal's suggestion of MIRAA takes 6 important indices, as well as merit, into account when computing a score, and is simple enough to be implemented transparently. If implemented, it would be instrumental in giving specific socioeconomic strata of people the assistance they need. However, MIRAA as it stands today faces some objections that can be traced back to the subjective origins of its scoring system.<br /><br />A "Modified MIRAA" score is proposed that achieves the same objectives as the original MIRAA system but eliminates the subjectivity of the score, potentially increasing its acceptability. The Modified MIRAA is also perfectly <span style="font-style: italic;">fair</span>: it compensates each socioeconomic stratum for exactly the loss in merit imposed by the socioeconomic system. As a consequence it also balances merit and socioeconomic status in a natural way. The price paid for the objectivity of the Modified MIRAA score is data collection and statistical analysis; however, this could also be done using simple and transparent protocols. <span style="font-weight: bold;"></span><br /><br /><span style="font-weight: bold;font-size:85%;" >"Armchair Guy" would like to invite comments and criticisms of the Modified MIRAA score proposed in the above article. 
Comments or criticisms should be based on rationale rather than rhetoric.</span><br /><br /><script src="http://www.google-analytics.com/urchin.js" type="text/javascript"><br /></script><br /><script type="text/javascript"><br />_uacct = "UA-1666123-1";<br />urchinTracker();<br /></script>Armchair Guyhttp://www.blogger.com/profile/03834195406816335480noreply@blogger.com2