A Right to/in Robots
All technologies are, to some extent, beautifully polluted with the stain of human needs, wants and whims. Or rather, the ways we process such wants and needs, the ways we seek to fulfill them, are imprinted in our resulting technologies, sometimes clumsily. The idea that values are present in design in many complex ways is not an obscure topic in science and technology studies. The notion only grows more sophisticated, or more candid, with arguments about the many correlations between law and code. Building on such understandings, scholars have debated the extent to which technology might have a duty to be minimally democratic. Insofar as technology’s representative duties have been given serious consideration, and in light of the particular implications of robotics for human affairs, we can ponder the possibility of a right to robots, and trace the boundaries of its implications.
Mark B. Brown has argued that the public has a legitimate claim to participate in overseeing technological development. Drawing on the theory of democratic representation, Brown compares the ways political systems and technologies represent their respective “constituents”. He holds that, sooner or later, technology comes to stand for diverse social interests, and that there is therefore a valid claim for technology to represent democratically grounded interests. In this sense, what authors like Brown analyze is not just a right to technology, but rights in technology; or, better stated, a right to have technology represent the values of its users. If, as Lawrence Lessig has argued, “code is law”, gauging technology’s capacity to regulate democratically becomes all the more relevant. Although Lessig’s claim has often been both legitimately criticized and misunderstood, it is fairly reasonable to expect designers to have some say in the processes that shape important social practices.
What about robots, then? How would all this play out in the context of technologies that take humans themselves as models? If one thinks of robotics and AI as complementary technological fields, what would an attempt to structure a valid ethical approach to values in design and representative technology imply? Is a “right to robots” (or in robots) unavoidable, all the more so given the very nature of robotics?
We can easily discard two things we should not worry about, at least immediately. First, a right to robots need not imply a descent into infinity, with designers seeking to create robots with values, and thus giving rise to a right to robots for robots. A loony idea, no doubt, but think about it: as a technology, robotics is not exempt from Langdon Winner’s argument that artifacts have politics. Winner’s point is not merely that a designer has a “mad scientist” potential, or that an artifact’s politics is the direct product of a designer’s premeditated goals. For Winner, artifacts do not simply represent a value system; they embody it. This means that, insofar as designers are, both consciously and tacitly, members of a certain social network, the very process of designing will inherit, directly or indirectly, the network’s imperatives. Furthermore, the network is not mechanically predictable: we cannot foresee with precision how it will react to technical innovations. However neutral or purely efficiency-oriented a designer might be, an artifact can be read and re-read by the group in diverse ways.
This all gets a bit more complex with robotics. There is a second-order situation in robotics, insofar as it is a value-laden technology that seeks to create something using a value-beholder as a model. If, as some might argue, robotics’ and AI’s ultimate goal is to emulate human dexterity and reasoning, one could argue that one of robotics’ values in design is to create an entity able to conceive of, and therefore play with, values in design! Hence the threat of infinite recursiveness.
While still a bit on the eccentric side, this first false problem with a right to robots leads us to the second: a right to robots does not mean that every household should, whenever possible, have the right to a robot; it does not mean everyone is entitled to a particular piece of hardware, wanted or not, needed or not. In this sense, a right to robots does not entitle a citizen to an explicitly anthropomorphized appliance. What it would mean is that robotics should proceed by openly acknowledging what authors like Brown, Lessig and Winner have described as some of the working relationships between technology and social systems. The design of robots could be democratic in itself, meaning that measures might have to be taken to make explicit, and deliberate over, the values involved in the process. Robotics is an interesting case of technology as value-laden because it seeks, or is deemed to seek, the emulation of entities that are themselves affected by values. Democratizing robotics, in the sense of making it representative in Brown’s terms, could be robotics’ ultimate tacit goal.
The right to/in robots, insofar as robots would be expected to “perform as us” and eventually become autonomous, should be built into design in a twofold way: first, by tending to what users need and want as a group, as in Brown’s version of the representative duties of technology; and second, by acknowledging that these are artifacts that echo human values in more sophisticated ways than, say, a washing machine. So while we should not expect a state-financed R2 unit in the mail anytime soon, contemporary robotics and AI research would do well to embrace the fact that all technologies channel a number of values, and that robotics’ inherent recursiveness only underscores the implications of this.
What rights those anthropomorphized artifacts could eventually hold themselves, perhaps even rights to/in us, lies beyond our present scope, and is perhaps better left to musings by the more enlightened minds on the subject, and to thoroughly challenging and quite entertaining sci-fi productions.