Etiene Dalcol (@etiene_d): The field of artificial intelligence has a huge problem with Western-centric thinking and denial of its colonial past🧵
Etiene Dalcol (@etiene_d): I sometimes find thought experiments created to induce fear of AI (even when the main purpose is a call for us to be responsible) bizarre. Certain beliefs might even be self-fulfilling prophecies that make AI horribly dangerous.
Etiene Dalcol (@etiene_d): One example is the Stamp Collector AI. Described by Robert Miles, it's a hypothetical future AI capable of true general intelligence. This AI is created for a simple goal: getting stamps for the stamp collector human who made it. But then it becomes a murder machine.
Etiene Dalcol (@etiene_d): The tale goes a bit like this: the AI could outbid someone on eBay and get 20 stamps. But because it is very smart, it can find other ways to get even more stamps. If wiping out humanity and recycling our atoms into stamps yields the most stamps, we're doomed.
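As an aside on the framing: the objective the tale attributes to the AI is a bare argmax over plans, scored by a single number. A minimal sketch of that decision rule (the plan names and stamp counts are my own hypothetical illustration, not anything from the thread):

```python
# The stamp collector's objective as the tale frames it: score every
# plan by expected stamps and take the argmax. Nothing else appears
# in the objective. All plans and numbers here are made up.
plans = {
    "outbid_on_ebay": 20,
    "print_counterfeit_stamps": 10_000,
    "recycle_humanity_into_stamps": 10**15,
}

def expected_stamps(plan: str) -> int:
    return plans[plan]

# No term penalizes harm, so the catastrophic plan wins by construction.
best_plan = max(plans, key=expected_stamps)
print(best_plan)  # -> recycle_humanity_into_stamps
```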
Etiene Dalcol (@etiene_d): This tale concludes that the AI is dangerous because it is not human. It cannot make an ethical decision. It will have disastrous consequences the moment it is switched on. But there are several problems with this conclusion.
Etiene Dalcol (@etiene_d): One issue is that the tale makes assumptions about what people are, what we want, and how we make decisions that do not hold in reality. I'm not saying an AI like this would not be dangerous. It would. But because it is EXACTLY LIKE HUMANS, or at least ~some~ humans
Etiene Dalcol (@etiene_d): See, the big contradiction is that a human version of this tale is neither hypothetical nor in the future. It's called colonialism. Humans committed several genocides to maximize sugar.
Etiene Dalcol (@etiene_d): This refusal to think about AI as human-like in this context is nothing short of a refusal to confront some of humanity's failings. It has nothing to do with AI.
Etiene Dalcol (@etiene_d): If we understand dehumanization as either dissolving personhood or preventing it from forming, we can see how this has already gone wrong and keeps going wrong. It's a moral protection that justifies us using other beings as objects. "If it's not really human, it's ok, right?"
Etiene Dalcol (@etiene_d): The other problem, related to the last one, is the error of modelling "intelligence" as an optimization problem. Even if you disregard speed trade-offs, imagining the pinnacle of intelligence as "finding the absolute best solution" is very cultural.
Etiene Dalcol (@etiene_d): This topic is well explored in psychology: some people are maximizers and others are satisficers, and this has implications for mental health too. But maximization in particular is tied to the European scholarly tradition of idealizing absolute rationality and objectivity.
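To make the maximizer/satisficer distinction concrete, a toy sketch (my own illustration; the scores and the "good enough" threshold are arbitrary): a maximizer examines every option and accepts only the single best score, while a satisficer stops at the first option that clears a threshold.

```python
from typing import Iterable, Optional

def maximize(options: Iterable[int]) -> int:
    # Maximizer: examine everything, keep only the absolute best.
    return max(options)

def satisfice(options: Iterable[int], good_enough: int) -> Optional[int]:
    # Satisficer: stop at the first option that clears the threshold.
    for option in options:
        if option >= good_enough:
            return option
    return None  # nothing was good enough

scores = [20, 55, 90, 100]
print(maximize(scores))       # 100: only the optimum will do
print(satisfice(scores, 50))  # 55: first acceptable option, search stops
```

The point of the contrast: the satisficer's stopping rule makes "good enough" a first-class part of the decision, rather than a failure to reach the optimum.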
Etiene Dalcol (@etiene_d): I see this reflected in the framing of superintelligence as something free from subjectivity, able to access an accurate model of reality, unlike humans.
Etiene Dalcol (@etiene_d): These thought experiments don't usually propose solutions for this problem, so I'm not sure if the intent is that we don't try to create a general AI, or that we try to patch it somehow to avoid this.
Etiene Dalcol (@etiene_d): Patching the AI with some type of human-like decision would bring us back to my first issue and raise the following very important question: which human?
Etiene Dalcol (@etiene_d): The last problem is related to what accessing an accurate model of reality means and whether that's even possible. In the end, it really depends on how you want to define this "general AI".
Etiene Dalcol (@etiene_d): Taking Gödel's proofs into account: to achieve more expressive power, a formal system has to sacrifice one of consistency, completeness, or decidability. To achieve hard AI generalization, we'll likely end up with something human-like, endowed with subjectivity.
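For reference, the formal result behind this is narrower than the thread's paraphrase. Gödel's first incompleteness theorem in its standard form (LaTeX; \nvdash assumes amssymb):

```latex
% Gödel's first incompleteness theorem: any consistent, effectively
% axiomatized theory T interpreting basic arithmetic (Robinson's Q)
% leaves some sentence G_T undecided -- T proves neither G_T nor its
% negation. Expressive power past this point costs completeness.
\[
  T \text{ consistent, effectively axiomatized, } T \supseteq \mathsf{Q}
  \;\Longrightarrow\;
  \exists\, G_T :\; T \nvdash G_T \ \text{and}\ T \nvdash \lnot G_T .
\]
```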
Etiene Dalcol (@etiene_d): The worst thing that could happen, though, is ending up with exactly the same thing while we (and the AI) believe it is reliable and complete.
Etiene Dalcol (@etiene_d): To sum up, the danger of artificial intelligence is not the artificial part, it's the intelligence part, and the assumptions about it.