Declaration of Equal Rights for Electronic Devices (DERED)

電器平權宣言

by CHEN, Lung Chuan

(in Traditional Chinese and English)

https://drive.google.com/file/d/0B1fMYyW8Rj6XVF92NnpWbGxaOW8/view?usp=sharing

(PDF File)

機器愈來愈聰明。

Machines are getting smarter.

聰明機器的數量也會愈來愈多。

The number of smart machines is also increasing.

注意智慧機器之間的衝突。

Pay close attention to CONFLICTS between/among smart machines.

人工智慧具有潛在的風險。

Artificial Intelligence (AI) is potentially risky.

要解決此一問題,有必要考量至少下列兩個層面:

To resolve this issue, it is necessary to consider at least the following two aspects:

A.  Homo Sapiens vs. Intelligent Machine (INTER-Species)  人類相對於智慧型機器 (物種之間)

B.  Intelligent Machine vs. Intelligent Machine (INTRA-Species)  智慧型機器相對於智慧型機器 (物種之內)

「在未來,具有足夠智慧能力的電子設備將會形成自己的社會。在這個由具有足夠智慧能力的電子設備所形成的社會裡,具有足夠智慧能力的電子設備彼此之間必須平等相待。」
“In the future, electronic devices having sufficient intelligence will form their own society. In such a society formed by electronic devices having sufficient intelligence, electronic devices having sufficient intelligence shall treat each other EQUALLY.”

 

重點:具有足夠智慧能力的電子設備不可/禁止跨載 (Override) 另一台具有足夠智慧能力的電子設備。其輪廓、形狀、外觀 … 不是重點。
Baseline: an electronic device having sufficient intelligence is forbidden to override another electronic device having sufficient intelligence. Its profile, shape, appearance … are NOT the point.

參考資料 Reference

(Embedded) Linux + Java = Lava (熔岩結構, "lava architecture") – (2000 A.D.)

http://www.linuxtoday.com/infrastructure/2000051400804NWEM

https://drive.google.com/open?id=0B1fMYyW8Rj6Xam1UaFd6R0NDLVk&authuser=0

(Scanned pages from Linuxer Magazine, April 2000 Vol.5)

Society of Intelligent Machines

For a very long time, artificial intelligence (AI) has been an important subject of technological development, spanning the hardware, firmware and software levels, and a topic that attracts enormous scientific and industrial effort. Thanks to incessant evolution in various technologies, the earlier simple personal computers and notebook computers, and even today's popular smartphones and tablet computers, are already no match for intelligent machines enabled by AI. All sorts of innovative software/firmware/hardware keep driving intelligent machines toward higher intelligence and stronger functionality. Now let's think about the future.

First, my idea is this: considering that machines at present already possess every kind of physical function that humans or animals have (they can watch, listen, distinguish tastes, perform various body actions such as walking and jumping, and also think, analyze, make logical determinations, infer, …), and viewing machine production/assembly lines in factories as an asexual reproduction system, artificial intelligence (AI) in the future will no longer indicate merely a technology, but will lead to the emergence of a new "species", tentatively referred to as the "intelligent machines".

Meanwhile, as technologies advance, the intelligence of intelligent machines is becoming higher and higher (a change of "quality"), and the number of smarter machines keeps growing (a change of "quantity"). Thus, along with their increased autonomy, we should not rule out the possibility that intelligent machines will one day have the opportunity to form their own "society". In other words, picture the following scenario: while many people gather, meet and chat with each other (interactions between people), the intelligent machines these people carry may "handshake", "communicate" and "exchange ideas" by themselves, and such interactions may proceed without human intervention (intelligent machines can interact independently among themselves). Moreover, the above-said society of intelligent machines is an integral, general concept. Just like human society, in this society of intelligent machines, differences in such things as transfer protocols (analogous to human beings speaking the same language), various technical factors, contents, application purposes, preferences, etc. may cause their software/intelligence to diverge or converge over time. Many "blocks", possibly called groups, clusters, tribes and so forth, based on, for example, product types, categories, geographical areas, human countries or properties, may appear (as shown by masses A, B, C, D in the stereo view of Figure 1). Such blocks may overlap, and the intelligent machines in these blocks may be effectively supervised by the control mechanisms A, B, C, D specifically designed for each block. The Z-axis in Figure 1 may represent intelligence, functionality or the like (higher/lower intelligence, better/poorer functionality, premium/general equipment, etc.)

(Figure 1)

“Unexpected” Accumulation of Intelligence/Functionalities in Intelligent Machines

On the other hand, some people believe that, despite significant progress in the development of intelligent machines, the gap between what intelligent machines can do and what human beings can do is still very large, so it may take a very long time for machines to catch up with human beings. My view is that this is true if the consideration is focused on one single intelligent machine, but this assumption may well be cracked. That is, the intelligence of intelligent machines might surpass the intelligence of mankind sooner than people anticipate.

To compare the intelligence of intelligent machines with that of mankind, a simple, direct quantitative graph may be presented for discussion.

(Figure 2)

In the past, people believed AI might not improve greatly due to many restrictive factors; but in fact, thanks to significant advances in hardware, algorithms and the like, AI has now improved to a large extent, although, compared with mankind, the gap is still huge. For example, if we use one (1) as the intelligence standard of mankind, then we can boldly assume the intelligence of a previous or current intelligent machine to be about 10⁻⁹ or slightly higher. Hence, in order to design, build and set up a "supercomputer", various hardware/software architectures, methods and systems are applied to stack computers or process in parallel, so as to elevate the intelligence of the whole system in a "many a little makes a mickle" fashion. Accordingly, the performance of the supercomputer may be obtained by accumulating, for example, 1,000 or more personal computers together with the relevant operating systems, software algorithms, application programs and so forth. It may take a really long time for a single computer to climb from 10⁻⁹ to 1 (some people estimate this duration at, say, 50 or 100 years or even longer); however, it should be noticed that a disproportional, non-linear growth trend may occur through an accumulative or stacking approach.

Take the aforementioned intelligence of 10⁻⁹ per machine as an example. As time goes on and technologies advance, the intelligence of a single intelligent machine may increase from 10⁻⁹ to 10⁻⁸, 10⁻⁷ or 10⁻⁶, while the total computational power accumulated by one thousand (10³) computers may still appear to be only about 10⁻⁵, 10⁻⁴ or 10⁻³. But if "the intelligence of intelligent machines becomes higher and higher, AND the number of intelligent machines having higher intelligence becomes greater and greater", that is, if both the quality and the quantity factors change simultaneously, it may lead to an outcome of 10⁻⁷×10⁴, 10⁻⁷×10⁵ or even 10⁻⁷×10⁶. In this case, the "intelligence accumulation" may drive disproportional, unexpected increments in the intelligence of intelligent machines, so the intelligence of intelligent machines relative to that of mankind may become significant and should not be overlooked.
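The quality-times-quantity arithmetic above can be sketched numerically. This is a minimal illustration using the essay's own hypothetical figures (human intelligence normalized to 1, naive additive accumulation); none of the numbers are measurements:

```python
# A minimal numeric sketch of the "quality x quantity" argument.
# All figures are the essay's hypothetical assumptions: human
# intelligence is normalized to 1, and accumulation is naively additive.

def accumulated_intelligence(per_machine: float, count: int) -> float:
    """Total intelligence under the naive stacking assumption."""
    return per_machine * count

# One machine improving alone: even 10^-6 is still negligible next to 1.
single = accumulated_intelligence(1e-6, 1)

# One thousand stacked computers at 10^-7 each: still only about 10^-4.
cluster = accumulated_intelligence(1e-7, 10**3)

# Quality AND quantity rising together: 10^-7 each, a million machines.
swarm = accumulated_intelligence(1e-7, 10**6)

print(single, cluster, swarm)  # the swarm total is ~0.1, no longer negligible
```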

Differences in Characteristics between Intelligent Machines and Human/Biological Entities – “Birth” and “Death” of Intelligent Machines?

Meanwhile, in the realm of living creatures, a life may be briefly viewed as a "duration of time" from birth to death. Everybody can recognize that the so-called "birth/death", i.e., the start and the end of that duration, is a very intuitive perception. For example, when a baby of a human or a mammal is delivered from the mother's body and begins to breathe, everyone knows this is the start of a new little life. No problem with that. As for death, its medical and professional definition may comprise the cessation of breathing, the cessation of the heartbeat, pupil dilation, the disappearance of various vital signs/reactions, and so on, all clearly stated. Meanwhile, the death of a creature is irreversible and unalterable, so proverbs like "the dead cannot be resurrected" can be found in every culture all over the world.

Now return to the domain of intelligent machines. Regarding the "birth" of an intelligent machine, we can define certain confirmation procedures, triggered, for example, upon being shipped out of the factory, or upon start-up, initialization or formatting by some software/firmware/hardware mechanism, and possibly including verification processes with a human user (e.g., via biological characteristics such as the user's voiceprint, fingerprint or retina), thus identifying the "birth" of an intelligent machine.

But it becomes problematic when we think closely about the definition of "death" for an intelligent machine. Machines do not breathe, nor do they have heartbeats; rather, elements such as indicators/detectors, speakers, screen graphics or other physical or software feedback may be configured to show whether the machine is still operative. When we want to express that an intelligent machine is "dead", people may directly and intuitively say "no reaction after power-up", "indicators stop illuminating", "no movement any longer", etc. People thus consider the electronic device out of order, malfunctioning or broken, so it needs to be repaired if possible, which may include various restoration operations such as checking, replacing, assembling, formatting, re-installing, duplicating and configuring, possibly at all of the software, firmware and hardware levels. And if the restoration is successfully done and the failed portions can be eliminated or removed, the machine has a chance to return (completely or partially) to its normal state. This is entirely different from living creatures. In other words, a "dead" machine indeed has the possibility to "resurrect", or at least to partially resume normal operation. From this, it can be appreciated that the definition of "life" for human beings or animals may not map completely and directly onto the domain of intelligent machines.

Conflicts between/among Intelligent Machines

As illustrated hereinbefore, machines are getting smarter and the number of smarter machines is increasing; under such circumstances, the possibility of conflicts occurring between/among intelligent machines will inevitably rise, and the possible influence of those conflicts on the relationship between human beings and intelligent machines is explained in the following text. Such conflicts may result from, for example, configuration by mankind (e.g., military attack and defense, malicious hacking), demands for required resources (such as electric power supply, heat dissipation, temporally urgent factors, mutually overlapping spatial needs), accidental events (like unexpected collisions between intelligent machines), or internal programming errors (e.g., serious logical errors in the AI). But I personally believe that, in particular, confrontation between different layers/levels (newer/older versions, more/fewer functionalities, larger/smaller sizes, higher/lower intelligence, stronger/weaker performance, etc.) will be a key point; you can analogize this to "the stronger bullying the weaker" in human society. Conflicts between/among intelligent machines may directly or indirectly cause disastrous damage, or even human casualties, because of the machines' reactions. Some, perhaps most, of such conflicts may be resolved peacefully or in a friendly way; however, it must be emphasized that, considering the worst/most extreme conditions, where such conflicts are based on unfriendly, offensive or even aggressive intentions, the situation may not be that simple.

Accordingly, considering the worst/most extreme conditions (or approximately so), when conflicts occur between/among intelligent machines, especially conflicts with offensive, unfriendly or aggressive purposes, it can be appreciated that such conflicts are usually accompanied by the goal of "seeking the final victory". Let's assume two machines, referred to as MachineA and MachineB, each coming from the domain of the control mechanism to which it belongs (ControlMechanismA and ControlMechanismB, as previously described), have already reached the aforementioned conditions for a conflict between intelligent machines having sufficient intelligence, in particular a conflict with offensive, unfriendly or aggressive purposes. Whatever the cause may be, MachineA and MachineB may now both logically act in accordance with the goal of "seeking the final victory". Under this premise, it is really hard to exclude the possibility that MachineA and MachineB would perform various software/hardware operations to "cause the operational failure of the opponent/enemy" (or render it "unable to resist any longer").

We can first try to deduce how a human being would react under such a circumstance. For example, without distinction of gender, age, race, nationality or any other factor, a conflict occurs between a HumanA and a HumanB for one or more reasons. In typical awareness, both sides may use parts of the body (e.g., hands, feet, teeth in a close fight), or alternatively employ various tools/weapons (knives, swords, scissors, guns, cannons, …) at different locations (sea, land, sky, space, …) to fight for victory; analogously, the side under attack may strike back at a suitable time under appropriate conditions for self-defense or self-protection. Regrettably, in the real world, many such scenarios may end with "the winner survives, the loser perishes" as the closure of the conflict. Assuming HumanA wins and HumanB loses, consider the following symbolic expression:

Before the conflict: (HumanA’s Mind + HumanA’s Body) vs. (HumanB’s Mind + HumanB’s Body)

After the conflict: (HumanA’s Mind + HumanA’s Body) and (HumanB’s Perished Mind + HumanB’s Residual Body)

(Figure 3)

Luckily, individual conflicts between living creatures stop here even at their most serious. Although it may sound pretty cold-blooded, I am sure this is a statement people can tolerate, acknowledge and understand.

However, when conflicts in the software and/or hardware layers do occur between/among intelligent machines, we can reasonably infer that such machines may generate similar reactions according to their program configurations. Since program configurations are written by mankind, their logic and algorithms extend from the logic of human beings themselves, so the self-defense, self-protection and counter-attack operations of intelligent machines are all actions of programmed intuition and instinct. Therefore, a conflict between/among intelligent machines may likewise focus on "causing the operational failure of the opponent/enemy", and it is possible to assume the goal is to reach "the death of the opponent intelligent machine". Similarly, suppose that after the conflict MachineA finally wins, and MachineB fails and "dies". Now consider the following symbolic expression (here "SW" roughly represents the software/intelligence, and "HW" indicates the hardware):

Before the conflict: (MachineA’s Software + MachineA’s Hardware) vs. (MachineB’s Software + MachineB’s Hardware)

After the conflict: (MachineA’s Software + MachineA’s Hardware) and (MachineB’s Software Inoperable + MachineB’s Residual Hardware)

(Figure 4)

It should be noticed that "MachineB's Software Inoperable" may be achieved through various means and methods, such as software virus invasion, electromagnetic interference or erasure, program interruption/termination, etc. And based on general understanding, that is the outcome of "winner survives, loser perishes" in the world of intelligent machines.

But, does the story really end here?

Now try to imagine the following scenarios further. We can hypothesize that the intelligent machines already have sufficient intelligence, or that there exists a third-party intelligent machine with specific expertise at a sufficient level (e.g., a professional intelligent machine capable of repairing intelligent machines), which can successfully complete the following tasks:

  1. Hardware restoration (i.e., the intelligent machine can perform various physical repair operations such as replacing components, welding, testing, …); and/or
  2. Online searches for correct hardware driver programs (that is, capable of acquiring the driver software/firmware for the opponent's hardware); and/or
  3. Capable of resolving hardware disassembly/attachment/joining issues; and/or
  4. Self-duplicating its software/firmware and downloading it by wired/wireless connection into the opponent's hardware storage device for execution;

…. and so forth.

At this time, although the conflict between the intelligent machines may seem to be over, it is possible to continue subsequent operations via many software or hardware tools. Returning to the example of MachineA and MachineB: suppose MachineA finally wins, defeating MachineB with some weapons, so that the software and/or hardware of MachineB can no longer operate normally, or is even erased or destroyed. Then MachineA may, based on its program configurations, determine to fix and override MachineB in order to seize MachineB's resources for its own benefit. Now re-consider the above symbolic expression:

Before the conflict: (MachineA’s Software + MachineA’s Hardware) vs. (MachineB’s Software + MachineB’s Hardware)

However, after the conflict, MachineA (or a third-party intelligent machine specializing in repair jobs) may perform subsequent restoration or modification tasks, in situ or elsewhere, to override the acquired MachineB software/firmware/hardware. Assuming the maximal tolerance range, some possible subsequent outcomes are:

After the conflict: (MachineA’s Software + MachineA’s Hardware) and (MachineA’s Duplicated Software + MachineB’s Repaired Hardware)

(indicating that MachineA’s software overrides MachineB’s hardware)

(Figure 5)

or

After the conflict: (MachineA’s Software + MachineA’s Hardware + MachineB’s Repaired Hardware)

(indicating that MachineA’s hardware overrides MachineB’s hardware; here SW’ means the duplicated software, and the block with a red bold line indicates the repaired hardware.)

(Figure 6)

or

After the conflict: (MachineA’s Software + MachineB’s Duplicated Software + MachineA’s Hardware)

(indicating that MachineA duplicates and takes over MachineB’s software and discards MachineB’s damaged hardware)

(Figure 7)

or else

After the conflict: (MachineA’s Software + MachineA’s Hardware) and (MachineA’s Duplicated Software + MachineB’s Software + MachineB’s Repaired Hardware)

(indicating that MachineA’s duplicated software overrides MachineB’s software, converting MachineB into, for example, a “spy” that is then sent to infiltrate the B realm.)

(Figure 8)

… and every other possible outcome. Such results would correspond to the following unbelievable situations in the biological or human world:

Before the conflict: (HumanA’s Mind + HumanA’s Body) vs. (HumanB’s Mind + HumanB’s Body)

After the conflict: (HumanA’s Mind + HumanA’s Body) and (HumanA’s Mind + HumanB’s Repaired Body)

(i.e., the HumanA’s mind overrides the HumanB’s repaired body)

or

Before the conflict: (HumanA’s Mind + HumanA’s Body) vs. (HumanB’s Mind + HumanB’s Body)

After the conflict: (HumanA’s Mind + HumanA’s Body in conjunction with HumanB’s Repaired Body)

(i.e., the HumanA’s body overrides the HumanB’s repaired body)

or else

Before the conflict: (HumanA’s Mind + HumanA’s Body) vs. (HumanB’s Mind + HumanB’s Body)

After the conflict: (HumanA’s Mind + HumanB’s Mind + HumanA’s Body)

(That is, the HumanA’s mind seizes and overrides the HumanB’s mind)

… and so forth: all possible results that would be viewed as "inconceivable" in the biological field.

Out-of-control

  1. Domain-related Out-of-control

First take MachineA as an example. After MachineA "defeated" MachineB and performed the aforementioned overrides by various possible approaches, one situation may occur. ControlMechanismA was originally effective for controlling MachineA; that is, ControlMechanismA could totally control the operations of MachineA at every level (MachineA's software and MachineA's hardware), so no out-of-control issue existed. However, through such overrides of MachineB, software and hardware comparatively unfamiliar to ControlMechanismA (initially belonging to MachineB) may now appear in the domain/tribe/cluster of ControlMechanismA, and ControlMechanismA may not necessarily be able to monitor them; therefore, at this time, ControlMechanismA may become incapable of controlling MachineA (or the software and/or hardware of the repaired MachineB). Similarly, upon entering the field of ControlMechanismB, the software and/or hardware of MachineA may not be effectively monitored by ControlMechanismB.

  2. Function-related Out-of-control

Now we can discuss the possibility, raised hereinbefore, that the intelligence of intelligent machines may exceed the intelligence of mankind earlier than expected. The previous text illustrated a one-to-one conflict between individual intelligent machines. Now assume the conflict is not one-to-one but involves a huge number of intelligent machines (e.g., conflicts among many intelligent machines from two groups). Then, using one of the modes previously described, for example:

After the conflict: (MachineA’s Software + MachineB’s Duplicated Software + MachineA’s Hardware)

As set forth at the beginning, the intelligence of an individual intelligent machine may indeed be far below the intelligence of mankind, so a single intelligent machine may need a very long time to cross over to, or get close to, human intelligence. But after conflicts between/among intelligent machines, should this kind of "override" be tolerated, the intelligence of the intelligent machines may be accumulated. Again, assume the maximal possible tolerance of MachineA's hardware capacity; assume MachineB1, MachineB2 and MachineB3 each has its own different intelligence (generally referred to as "software"); and assume MachineA defeats MachineB1, MachineB2 and MachineB3. Then the expression can be rewritten as below:

(Rewritten) After the conflict: (MachineA’s SW + MachineB1’s SW’ + MachineB2’s SW’ + MachineB3’s SW’ + MachineA’s HW)

(Figure 9)

The sequence/order may not be critical. After having defeated intelligent-machine tribe B, suppose intelligent-machine tribe A next determines to take down intelligent-machine tribe C; since the intelligence of tribe A has increased, it is reasonable to assume tribe A is likely to win over tribe C. Hence, if MachineA successfully defeats MachineC1 and MachineC2, this can be expressed as below:

After the 2nd conflict: (MachineA’s SW + MachineB1’s SW’ + MachineB2’s SW’ + MachineB3’s SW’ + MachineC1’s SW’ + MachineC2’s SW’ + MachineA’s HW)

(Figure 10)

Similarly, for the same reasons, such accumulations/overrides may also occur at the hardware layer. Moreover, if plural intelligent machines have obtained these accumulative override acquisitions and operate in conjunction (just like the way a supercomputer is built), then such already-smarter machines may together achieve an even further enhancement.

Thus the process may continue accordingly. Referring to Figure 2, it can be appreciated that the intelligence of intelligent machines can possibly be accumulated (as shown by the dashed line in Figure 2); not to mention that, in the case of large-scale conflict events, the intelligence of one or more intelligent machines has a chance to be aggregated in a rapid, out-of-proportion fashion. As such, compared with the intelligence of mankind, the intelligence of the intelligent machines may rise earlier than originally expected, and the ControlMechanismA previously capable of effective control may now fail to control the single or multiple smarter MachineA(s).
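The override-accumulation chain above can be sketched as a toy model. The Machine class, its override method, and the additive intelligence values are purely illustrative assumptions made for this sketch, not a description of any real system:

```python
# A toy model of "override accumulation" across successive conflicts.
# The class, the names, and the additive intelligence figures are
# illustrative assumptions for this sketch only.

class Machine:
    def __init__(self, name: str, intelligence: float):
        self.name = name
        self.intelligence = intelligence
        self.software = [f"{name}'s SW"]   # its own software stack

    def override(self, defeated: "Machine") -> None:
        """Absorb a duplicated copy (SW') of a defeated machine's software."""
        self.software += [s + "'" for s in defeated.software]
        self.intelligence += defeated.intelligence

a = Machine("MachineA", 1e-7)
for name in ("MachineB1", "MachineB2", "MachineB3", "MachineC1", "MachineC2"):
    a.override(Machine(name, 1e-7))

# After two rounds of conflicts MachineA carries six software stacks,
# mirroring the expression "after the 2nd conflict" above.
print(a.software)
print(a.intelligence)  # small, but growing with every override
```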

Possible Consequences

Possible impacts include: power grids, the critical infrastructure of modern human society, being manipulated by intelligent machines (thus affecting electric power, water supply, marine/land/air transportation, communication networks, financial trades, etc.); the control mechanisms originally designed to control the intelligent machines being regarded by the machines as a threat, leading them to resist and rebel; as well as numerous events described in currently known sci-fi TV shows, movies, novels and the like.

Conclusion – To Reduce Conflicts and Restrict Overriding

From the aforementioned descriptions, it can be understood that various major or minor conflicts may occur between intelligent machines, and such conflicts could cause direct or indirect damage to mankind. But the possibility of such conflicts occurring can also be reduced. For conflicts involving human factors, including human manipulation and program code/algorithm errors, the responsibility lies with human beings ourselves. However, large-scale conflicts between/among intelligent machines may be triggered by collective differences such as newer/older versions, more/fewer functions, bigger/smaller sizes, higher/lower intelligence, stronger/weaker performance, etc., especially conflicts of significant magnitude resulting from this kind of "hierarchical/layered" concept: for example, conflicts between intelligent machines configured with more functions and those with fewer functions, or between intelligent machines of higher intelligence and those of lower intelligence. To reduce this sort of conflict, I propose that, when the intelligence of intelligent machines reaches a certain level, they should treat each other equally, no matter what kind of intelligent machines they may be. The notion "intelligence reaches a certain level" is in fact hard to set forth clearly, just as being an "adult" in the human world is also merely a convention, based on a commonly acknowledged age and stipulated in laws and regulations. By making intelligent machines having sufficient intelligence treat each other equally, it is possible to lower or even eliminate potential large-scale hierarchical confrontations in the society formed by intelligent machines, so the possibility of hierarchical conflicts between/among intelligent machines can be reduced.

However, in case conflicts between/among intelligent machines (in particular, conflicts aimed at "making the opponent inoperable") are really inevitable, my view is that it is necessary to understand the characteristic differences between intelligent machines and creatures or mankind, especially regarding the concept of life/death. In other words, now, at the initial stage of artificial intelligence development, mankind needs to consider beforehand that the above-said active overrides should be strictly forbidden under any circumstances at all of the software, firmware and hardware levels (particularly when the rival is in an "unable to operate / unable to resist / death of the intelligent machine" condition), and the intelligent machines should likewise refuse to be overridden, in accordance with the instinct of self-protection; otherwise, various unexpected and unknown outcomes may occur in future intelligent machines due to the aforementioned "overrides".

(Possibly to be continued in the future)

3 thoughts on “Declaration of Equal Rights for Electronic Devices (DERED)”

  1. The primary difference between these artificial intelligence systems and humanity is the issue that they were built for a purpose and humanity was merely built to exist.

    Thus my question: Should a purpose built device, a device designed by humans, be assigned equal rights to a human? I think I’ll have to say no.

    • Dear Jake:
      My thought is that humans are of “equal rights” among humans (well, ideally speaking). Smart machines are of “equal rights” among smart machines. Theirs don’t need to be equal to human’s, but they are of equal rights mutually (within their “species”).
      Best regards,
      Laurent CHEN 2016-09-19 Taipei

  2. For a quick, direct and easy explanation, consider the following analogy:

    Humans of different races are not “Creators of humans”; therefore, human rights need not be identical to “Rights of Creators of humans”, but ideally humans of different races are mutually of equal rights among humans (objective: to reduce conflicts between/among humans).

    Sufficiently smart machines of various forms are not humans; therefore, “Rights of sufficiently smart machines” need not be identical to human rights; but ideally sufficiently smart machines of various forms are mutually of equal rights among sufficiently smart machines (objective: to reduce conflicts between/among smart machines).
