About “ANCIENT CHINESE ROCK WRITINGS CONFIRM EARLY TRANS-PACIFIC INTERACTION”

(Summary)  July 21, 2015

https://drive.google.com/file/d/0B1fMYyW8Rj6XTzdmZGxfTUVneFE/view?usp=sharing

====================================================================

(Subsequent ideas archived on July 16, 2015, Taipei, Taiwan)

Cartouche 1.  乚 丩 旬 丩

(Together we hide/left, ten years together)

Cartouche 2.  齒「呂」回「呂」

(Speaking of “City of Singing”, returning to “City of Singing”)

Cartouche 3.  寅 虎 回 丩

(In Year of Tiger, we return together)

==================================

Horizontal lines in Cartouche 2 indicate quotation marks (“ ”).

(Modified: these notations are used in the same way as 私名號 / 專名號, the proper-name marks in Chinese typography.)

“呂”

http://www.vividict.com/WordInfo.aspx?id=628

For the two characters in the lower right and lower left of Cartouche 2, the larger square in each character indicates “City” (imagine the city walls), while the single smaller square in the lower-right character and the two smaller squares in the lower-left character both mean “SINGING”. The carver simply did not separate the two smaller squares, so the glyph may look like 「日」, but it is not 「日」; it is 「呂」.

(呂,唱歌,Solo or Chorus,「City of Singing 歌城/音律之城」,朝歌城,The capital of Shang Dynasty)

One small square – Solo (Singing)

Two small squares – Chorus (Singing)

Also, for reasons of SYMMETRY, comparing Cartouche 1 and Cartouche 2 reveals a correspondence (丩 and 丩 in Cartouche 1, 「呂」 and 「呂」 in Cartouche 2). Moreover, the two 丩 in Cartouche 1 are mirrored left to right, while the two 「呂」 in Cartouche 2 stand in a one-to-two relation (solo vs. chorus).

================================

These reports 新聞報導說明

http://www.dailymail.co.uk/sciencetech/article-3152556/Did-China-discover-AMERICA-Ancient-Chinese-script-carved-rocks-prove-Asians-lived-New-World-3-300-years-ago.html
Did China discover AMERICA? Ancient Chinese script carved into rocks may prove Asians lived in New World 3,300 years ago

and 以及

http://udn.com/news/story/6812/1047910
發現象形文字 中國人比哥倫布更早登美洲

In Mr. John Ruskamp’s article,
http://www.asiaticechoes.org/PDF/ChineseRockWriting.pdf
ANCIENT CHINESE ROCK WRITINGS CONFIRM EARLY TRANS-PACIFIC INTERACTION

In Figure 7 on page 8, “Cartouche 3”, there is an unknown symbol on the right side. My conjecture is that it indicates TIGER (虎), which may denote 「寅虎年」, the Year of the Tiger. (It occupies two grid cells, so it is probably a two-character term rather than a single character.)

在作者的文章第 8 頁的圖 7 (Cartouche 3) 右側有個未知符號,我猜測是「虎」,意思是「寅虎年」。(佔有兩格,應該是個「詞」而非單一個「字」)

2015-07-12

===============================================

Declaration of Equal Rights for Electronic Devices (DERED)

電器平權宣言

Declaration of Equal Rights for Electronic Devices (DERED)

by CHEN, Lung Chuan

(in Traditional Chinese and English)

https://drive.google.com/file/d/0B1fMYyW8Rj6XVF92NnpWbGxaOW8/view?usp=sharing

(PDF File)

機器愈來愈聰明。

Machines are getting smarter.

聰明機器的數量也會愈來愈多。

The number of smart machines is also increasing.

注意智慧機器之間的衝突。

Pay close attention to CONFLICTS between/among smart machines.

人工智慧具有潛在的風險。

Artificial Intelligence (AI) is potentially risky.

要解決此一問題,有必要考量至少下列兩個層面:

To resolve this issue, it is necessary to consider at least the following two aspects:

A.  Homo Sapiens vs. Intelligent Machine (INTER-Species)  人類相對於智慧型機器 (物種之間)

B.  Intelligent Machine vs. Intelligent Machine (INTRA-Species)  智慧型機器相對於智慧型機器 (物種之內)

「在未來,具有足夠智慧能力的電子設備將會形成自己的社會。在這個由具有足夠智慧能力的電子設備所形成的社會裡,具有足夠智慧能力的電子設備彼此之間必須平等相待。」
“In the future, electronic devices having sufficient intelligence will form their own society. In such a society formed by electronic devices having sufficient intelligence, electronic devices having sufficient intelligence shall treat each other EQUALLY.”

 

重點:具有足夠智慧能力的電子設備不可/禁止跨載 (Override) 另一台具有足夠智慧能力的電子設備。其輪廓、形狀、外觀 … 不是重點。
Baseline: an electronic device having sufficient intelligence is forbidden / not allowed to override another electronic device having sufficient intelligence. Its profile, shape, appearance, … are NOT the point.
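As a thought experiment only, the baseline above can be written as a simple rule check. The Python sketch below is a minimal illustration; the names (Device, INTELLIGENCE_THRESHOLD, may_override) are hypothetical and do not refer to any existing system, and the numeric threshold is an arbitrary placeholder for “sufficient intelligence”.

from dataclasses import dataclass

# Hypothetical threshold standing in for "sufficient intelligence";
# as the declaration itself notes, the real boundary is hard to define.
INTELLIGENCE_THRESHOLD = 1.0

@dataclass
class Device:
    name: str
    intelligence: float  # abstract intelligence score (placeholder scale)

def may_override(actor: Device, target: Device) -> bool:
    """Return True only if the DERED baseline permits actor to override target."""
    both_sufficient = (actor.intelligence >= INTELLIGENCE_THRESHOLD
                       and target.intelligence >= INTELLIGENCE_THRESHOLD)
    # Profile, shape and appearance are irrelevant; only sufficiency matters.
    return not both_sufficient

a = Device("MachineA", intelligence=1.2)
b = Device("MachineB", intelligence=1.5)
print(may_override(a, b))  # False: both devices are above the threshold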

參考資料 Reference

(Embedded) Linux + Java = Lava (熔岩結構) – (2000 A.D.)

http://www.linuxtoday.com/infrastructure/2000051400804NWEM

https://drive.google.com/open?id=0B1fMYyW8Rj6Xam1UaFd6R0NDLVk&authuser=0

(Scanned pages from Linuxer Magazine, April 2000 Vol.5)

Society of Intelligent Machines

Artificial intelligence (AI) has long been an important subject of technological development, spanning the hardware, firmware and software levels, and it continues to attract enormous scientific and industrial effort. Owing to continuous technical evolution, the earlier simple personal computers and notebook computers, and even today’s popular smartphones and tablets, are already no match for AI-enabled intelligent machines. All sorts of innovative software, firmware and hardware keep driving intelligent machines toward higher intelligence and stronger functionality. Now let’s think about the future.

First, my view is this: machines at present already possess many of the physical capabilities of humans and animals (they can watch, listen, distinguish tastes, perform various body movements such as walking and jumping, and also think, analyze, reason and make logical judgments), and machine production/assembly lines in factories can be viewed as an asexual reproduction system. Under this view, artificial intelligence (AI) in the future will no longer be merely a technology; it will lead to the emergence of a new “species”, tentatively referred to here as the “intelligent machines”.

Meanwhile, as technologies advance, the intelligence of intelligent machines keeps rising (a change in “quality”) and the number of smarter machines keeps growing (a change in “quantity”). Together with their increasing autonomy, we should not rule out the possibility that intelligent machines will one day form their own “society”. Picture the following scenario: while many people gather, meet and chat with one another (interactions between people), the intelligent machines these people carry may “shake hands”, “communicate” and “exchange ideas” by themselves, and such interactions may proceed without human intervention (intelligent machines interacting independently among themselves). This society of intelligent machines is an integral, general concept. Just like human society, it may diverge or converge over time because of differences in transfer protocols (analogous to human beings speaking the same language), technical factors, contents, application purposes, preferences and so on, so that its software/intelligence changes as time goes by. As a result, many “blocks”, which might be called groups, clusters, tribes and so forth, may appear, organized by, for example, product type, category, geographic area, country or other properties (shown as masses A, B, C and D in the stereo view of Figure 1). Such blocks may overlap, and the intelligent machines in each block may be effectively supervised by the control mechanism A, B, C or D designed specifically for that block. The Z-axis in Figure 1 may represent intelligence, functionality or the like (higher/lower intelligence, better/poorer functionality, premium/general equipment, etc.).

(Figure 1)

“Unexpected” Accumulation of Intelligence/Functionalities in Intelligent Machines

On the other hand, some people believe that, despite significant progress in the development of intelligent machines, the gap between what intelligent machines can do and what human beings can do is still very large, so it may take a very long time for machines to catch up. My view is that this is true if we consider only one single intelligent machine, but the conclusion may well be broken: the intelligence of intelligent machines might surpass the intelligence of the mankind sooner than people anticipate.

To compare the intelligence of intelligent machines with that of the mankind, a simple, directly quantized graph can be presented for discussion.

(Figure 2)

In the past, people believed AI could not improve greatly because of many restrictive factors, but in fact, thanks to significant advances in hardware, algorithms and the like, AI has already grown to a large extent; compared with the mankind, however, the gap is still huge. For example, if we use one (1) as the intelligence standard of the mankind, then we can boldly assume that the intelligence of earlier or current intelligent machines is only about 10^-9, or slightly higher. Hence, to design, build and set up a “super computer”, various hardware/software architectures, methods and systems are applied to stack computers or process in parallel, elevating the intelligence of the overall system in a “many a pickle makes a mickle” fashion. Accordingly, the performance of a super computer may be obtained by accumulating, say, 1,000 or more personal computers together with the relevant operating systems, software algorithms and application programs. Although it may take a very long time for a single computer to climb from 10^-9 to 1 (some people estimate 50 or 100 years, or even longer), it should be noticed that a disproportional, non-linear growth trend may occur through an accumulative or stacking approach. Take an intelligent machine whose intelligence is actually about 10^-9 as an example: as time goes on and technologies advance, the intelligence of a single machine may increase from 10^-9 to 10^-8, 10^-7 or 10^-6, and the total computation power accumulated by one thousand (10^3) computers may appear to be only about 10^-5, 10^-4 or 10^-3. But if “the intelligence of intelligent machines becomes higher and higher, and the number of intelligent machines having higher intelligence becomes more and more”, that is, if quality and quantity change simultaneously, the outcome may be 10^-7 × 10^4, 10^-7 × 10^5 or even 10^-7 × 10^6. In this case, such “intelligence accumulation” may drive disproportional, unexpected increments in the intelligence of intelligent machines, so the intelligence of intelligent machines relative to the intelligence of the mankind becomes significant and should not be overlooked.
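The quality-times-quantity argument above can be illustrated with a few lines of arithmetic. The sketch below is only a back-of-the-envelope illustration using the example figures from this paragraph (10^-9 to 10^-6 per machine, 10^3 to 10^6 machines, human intelligence normalized to 1); the numbers are placeholders from the text, not measurements of any real system.

# Back-of-the-envelope illustration of "quality x quantity" accumulation.
# Human intelligence is normalized to 1; per-machine values and fleet sizes
# are the example figures used in the paragraph above.

per_machine = [1e-9, 1e-8, 1e-7, 1e-6]   # intelligence of one machine (quality)
fleet_sizes = [1e3, 1e4, 1e5, 1e6]       # number of cooperating machines (quantity)

for q in per_machine:
    for n in fleet_sizes:
        total = q * n                    # naive aggregate, assuming perfect pooling
        print(f"per-machine {q:.0e} x {n:.0e} machines = {total:.0e} of human level")

# e.g. 1e-07 x 1e+06 = 1e-01: a tenth of the human benchmark, reached far
# sooner than the single-machine trend alone would suggest.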

Differences in Characteristics between Intelligent Machines and Human/Biological Entities – “Birth” and “Death” of Intelligent Machines?

Meanwhile, in the realm of living creatures, a life may be briefly viewed as “a span of time” from birth to death. Everybody recognizes that the so-called “birth/death”, i.e., the start and the end of this span of time, is a very intuitive notion. For example, when a human or mammalian baby is delivered from its mother’s body and begins to breathe, everyone knows that a little new life has started. No problem there. As for death, its medical and professional definition comprises the stopping of breathing, the stopping of the heartbeat, pupil dilation, the disappearance of various vital signs/reactions and so on, all of which are clearly stated. Moreover, the death of a creature is irreversible and unalterable, so proverbs like “the dead cannot be resurrected” can be found in every culture around the world.

Now return to the domain of intelligent machines. Regarding the “birth” of an intelligent machine, we can define certain confirmation procedures, such as being triggered upon shipment from the factory, or start-up, initialization or formatting by some software/firmware/hardware mechanism, possibly including verification steps with a human user (e.g., via biometric characteristics such as the user’s voiceprint, fingerprint, retina or the like), and thereby identify the “birth” of an intelligent machine.

But it becomes problematic when we think carefully about the definition of “death” for an intelligent machine. Machines do not breathe, nor do they have heartbeats; instead, elements such as indicators/detectors, speakers, on-screen graphics or other physical or software feedback may be configured to show whether the machine is still operative. When we want to express that an intelligent machine is “dead”, people may directly and intuitively say “the machine shows no reaction after power-up”, “the indicators stop illuminating”, “it no longer moves”, and so on. People then consider the electronic device out of order, malfunctioning or broken, so it needs to be repaired if possible, which may of course include various restoration operations such as checking, replacing, assembling, formatting, re-installing, duplicating and configuring, possibly at the software, firmware and hardware levels. If the restoration succeeds and the problematic or failed portions are eliminated or removed, the machine has a chance to go back, completely or partially, to its normal state. This is entirely different from living creatures. In other words, a “dead” machine indeed has the possibility to “resurrect”, or at least to operate partially in a normal state. From this it can be appreciated that the definition of “life” for human beings or animals cannot be completely and directly mapped onto the domain of intelligent machines.
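To make the contrast concrete, here is a small sketch that models the machine lifecycle described above as a state machine in which, unlike biological death, the “failed” state is not terminal. The names (MachineState, register_birth, repair) are hypothetical illustrations, not an existing API.

from enum import Enum, auto

class MachineState(Enum):
    UNREGISTERED = auto()   # fresh from the factory, not yet "born"
    OPERATIONAL = auto()    # normal operation
    FAILED = auto()         # "dead": no reaction after power-up, indicators off

def register_birth(state: MachineState, user_verified: bool) -> MachineState:
    """Factory shipment / initialization plus user verification marks the 'birth'."""
    if state is MachineState.UNREGISTERED and user_verified:
        return MachineState.OPERATIONAL
    return state

def repair(state: MachineState, restoration_succeeded: bool) -> MachineState:
    """Unlike a biological death, a FAILED machine may return to OPERATIONAL."""
    if state is MachineState.FAILED and restoration_succeeded:
        return MachineState.OPERATIONAL
    return state

state = register_birth(MachineState.UNREGISTERED, user_verified=True)
state = MachineState.FAILED                       # e.g. after a breakdown
state = repair(state, restoration_succeeded=True)
print(state)  # MachineState.OPERATIONAL: the "dead" machine has come back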

Conflicts between/among Intelligent Machines

As illustrated above, machines are getting smarter and the number of smart machines keeps increasing; under such circumstances, the possibility of conflicts occurring between/among intelligent machines will inevitably rise, and the possible influence of such conflicts on the relationship between human beings and intelligent machines is explained in the following text. Conflicts between/among intelligent machines may result from, for example, configurations made by the mankind (e.g., military attack and defense, malicious hacking), demands for required resources (such as electric power supply, heat dissipation, time-critical factors, overlapping spatial needs), accidental events (such as unexpected collisions between intelligent machines), internal programming errors (e.g., serious logical errors in the AI) and so on. I personally believe that, in particular, confrontation between different layers/levels, including newer/older versions, more/fewer functions, larger/smaller sizes, higher/lower intelligence, stronger/weaker performance and so on, will be a key point (analogous to “the strong bullying the weak” in human society). Conflicts between/among intelligent machines may directly or indirectly cause disastrous damage or even human casualties because of the machines’ reactions. Some, perhaps most, of such conflicts may be resolved peacefully or in a friendly way; however, it must be emphasized that, considering the worst or most extreme conditions, if such conflicts are based on unfriendly, offensive or even aggressive intentions, the situation may not be that simple.

Accordingly, considering the worst/most extreme conditions (or approximately so), when conflicts occur between/among intelligent machines, especially conflicts with offensive, unfriendly or aggressive purposes, it can be appreciated that such conflicts will usually be accompanied by the goal of “seeking the final victory”. Let us assume that two machines, referred to as MachineA and MachineB and coming from the domains of the control mechanisms to which they respectively belong, e.g., ControlMechanismA and ControlMechanismB as previously described, have already reached the aforementioned situation: a conflict between intelligent machines having sufficient intelligence, in particular a conflict with offensive, unfriendly or aggressive purposes. Whatever the cause may be, MachineA and MachineB may now both logically act in accordance with the goal of “seeking the final victory”. Under this premise, it is really hard to exclude the possibility that MachineA and MachineB will perform various software/hardware operations aimed at “rendering the opponent/enemy unable to operate” (or “unable to resist any longer”).

We can first try to deduce how a human being would react under such a circumstance. For example, without distinctions of gender, age, race, nationality or any other factor, a conflict occurs between a HumanA and a HumanB for one or more reasons. In typical awareness, both sides may use parts of the body (e.g., hands, feet, teeth in a close fight), or employ various tools/weapons (knives, swords, scissors, guns, cannons, …) in various locations (the sea, the land, the sky, space, …) to fight for victory; likewise, the side under attack may strike back at a suitable time and under appropriate conditions for reasons of self-defense or self-protection. Regrettably, in the real world, many such scenarios end with “the winner survives, the loser perishes” as the closure of the conflict. Assuming HumanA wins and HumanB loses, consider the following symbolic expression:

Before the conflict: (HumanA’s Mind + HumanA’s Body) vs. (HumanB’s Mind + HumanB’s Body)

After the conflict: (HumanA’s Mind + HumanA’s Body) and (HumanB’s Perished Mind + HumanB’s Residual Body)

(Figure 3)

Luckily, individual conflicts between living creatures stop here even at their most serious. Although it may sound rather cold-blooded, I am sure this is a statement people can tolerate, acknowledge and understand.

However, when conflicts in the software and/or hardware layers do occur between/among intelligent machines, we can reasonably infer that such machines will generate similar reactions in accordance with their program configurations. Considering that program configurations are made by the mankind, their logic, algorithms and so on extend from the logic of human beings themselves, so self-defense, self-protection and counter-attack operations of intelligent machines are all intuitive, instinctive actions in programming terms. Therefore, a conflict between/among intelligent machines may potentially focus on “rendering the opponent/enemy unable to operate”, and it is possible to assume that its goal is “the death of the opponent intelligent machine”. Similarly, suppose that after the conflict MachineA finally wins and MachineB fails and “dies”. Now think about the following symbolic expression (here “SW” roughly represents the software/intelligence, and “HW” the hardware):

Before the conflict: (MachineA’s Software + MachineA’s Hardware) vs. (MachineB’s Software + MachineB’s Hardware)

After the conflict: (MachineA’s Software + MachineA’s Hardware) and (MachineB’s Software Inoperable + MachineB’s Residual Hardware)

(Figure 4)

It should be noticed that “MachineB’s Software Inoperable” may be achieved through various means and methods, such as software virus invasion, electromagnetic interference or erasure, program interruption/termination and so on. By general understanding, that is the “winner survives, loser perishes” outcome in the world of intelligent machines.

But, does the story really end here?

Now try to imagine the following scenarios further. We can hypothesize that the intelligent machines already have sufficient intelligence, or that there exists a third-party intelligent machine with specific expertise at a sufficient level (e.g., a professional intelligent machine capable of repairing intelligent machines), which can successfully complete the following tasks:

  1. Hardware restoration (i.e., the intelligent machine can perform various physical repair operations such as replacing components, welding, testing, …); and/or
  2. Online searches for the correct hardware driver programs (that is, it can acquire the driver software/firmware for the opponent’s hardware); and/or
  3. Resolving hardware disassembly/attachment/joining issues; and/or
  4. Duplicating its own software/firmware and downloading it, via wired/wireless connections, into the opponent’s hardware storage device for execution;

…. and so forth.

At this point, although the conflict between the intelligent machines may seem to be over, it is possible to continue subsequent operations with many software or hardware tools. Returning to the example of MachineA and MachineB, suppose MachineA finally wins and defeats MachineB using some weapons, so that MachineB’s software and/or hardware can no longer operate normally, or has even been erased or destroyed. MachineA may then, based on its program configuration, decide to fix and override MachineB in order to seize MachineB’s resources for its own use. Now re-consider the above symbolic expression:

Before the conflict: (MachineA’s Software + MachineA’s Hardware) vs. (MachineB’s Software + MachineB’s Hardware)

After the conflict, however, MachineA (or a third-party intelligent machine specializing in repair jobs) may perform subsequent restoration or modification tasks, in situ or elsewhere, to override the acquired MachineB’s software/firmware/hardware. Assuming the maximum tolerable range, some possible subsequent outcomes are:

After the conflict: (MachineA’s Software + MachineA’s Hardware) and (MachineA’s Duplicated Software + MachineB’s Repaired Hardware)

(indicating that the MachineA’s software overrides the MachineB’s hardware)

(Figure 5)

or

After the conflict: (MachineA’s Software + MachineA’s Hardware + MachineB’s Repaired Hardware)

(indicating that the MachineA’s hardware overrides the MachineB’s hardware; in the figure, SW’ means the duplicated software and the block with a bold red line indicates the repaired hardware.)

(Figure 6)

or

After the conflict: (MachineA’s Software + MachineB’s Duplicated Software + MachineA’s Hardware)

(indicating that the MachineB’s software is duplicated and overridden, while the MachineB’s damaged hardware is discarded)

(Figure 7)

or else

After the conflict: (MachineA’s Software + MachineA’s Hardware) and (MachineA’s Duplicated Software + MachineB’s Software + MachineB’s Repaired Hardware)

(indicating that the MachineA’s duplicated software overrides the MachineB’s software and converts the MachineB into, for example, a “spy”, which is then sent to infiltrate the B realm.)

(Figure 8)

… and every other possible outcome. In the biological or human world, such results would correspond to the following unbelievable situations:

Before the conflict: (HumanA’s Mind + HumanA’s Body) vs. (HumanB’s Mind + HumanB’s Body)

After the conflict: (HumanA’s Mind + HumanA’s Body) and (HumanA’s Mind + HumanB’s Repaired Body)

(i.e., the HumanA’s mind overrides the HumanB’s repaired body)

or

Before the conflict: (HumanA’s Mind + HumanA’s Body) vs. (HumanB’s Mind + HumanB’s Body)

After the conflict: (HumanA’s Mind + HumanA’s Body in conjunction with HumanB’s Repaired Body)

(i.e., the HumanA’s body overrides the HumanB’s repaired body)

or else

Before the conflict: (HumanA’s Mind + HumanA’s Body) vs. (HumanB’s Mind + HumanB’s Body)

After the conflict: (HumanA’s Mind + HumanB’s Mind + HumanA’s Body)

(That is, the HumanA’s mind seizes and overrides the HumanB’s mind)

… and so forth: all possible results that would be viewed as “inconceivable” in the biological field.
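To make the override outcomes of Figures 5 through 8 concrete, here is a minimal sketch that represents each machine as a (software, hardware) pair and writes the post-conflict combinations as plain data transformations. The representation (the Machine dataclass and the payload names) is purely illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class Machine:
    software: List[str]   # names of software/intelligence payloads
    hardware: List[str]   # names of hardware units

# Before the conflict
machine_a = Machine(software=["A.SW"], hardware=["A.HW"])
machine_b = Machine(software=["B.SW"], hardware=["B.HW"])

# Figure 5: A's duplicated software overrides B's repaired hardware.
fig5 = [machine_a, Machine(software=["A.SW'"], hardware=["B.HW(repaired)"])]

# Figure 6: A's hardware absorbs (overrides) B's repaired hardware.
fig6 = [Machine(software=["A.SW"], hardware=["A.HW", "B.HW(repaired)"])]

# Figure 7: B's software is duplicated onto A; B's damaged hardware is discarded.
fig7 = [Machine(software=["A.SW", "B.SW'"], hardware=["A.HW"])]

# Figure 8: A's duplicated software overrides B's software (the "spy" scenario).
fig8 = [machine_a,
        Machine(software=["A.SW'", "B.SW"], hardware=["B.HW(repaired)"])]

for name, outcome in [("Fig. 5", fig5), ("Fig. 6", fig6),
                      ("Fig. 7", fig7), ("Fig. 8", fig8)]:
    print(name, outcome)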

Out-of-control

  1. Domain-related Out-of-control

Take MachineA as an example first. After MachineA has “defeated” MachineB and performed the aforementioned overrides through various possible approaches, one situation may occur. ControlMechanismA was originally effective for controlling MachineA; that is, ControlMechanismA could fully control the operations of MachineA at every level (MachineA’s software and MachineA’s hardware), so no out-of-control issue existed. However, through the overrides on MachineB, software and hardware that are comparatively unfamiliar to ControlMechanismA (they originally belonged to MachineB) now appear within ControlMechanismA’s domain/tribe/cluster, and ControlMechanismA is not necessarily able to monitor them; therefore, ControlMechanismA may no longer be able to fully control MachineA (or the software and/or hardware of the repaired MachineB). Similarly, upon entering ControlMechanismB’s field, the software and/or hardware of MachineA may not be effectively monitored by ControlMechanismB.
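This domain-related loss of control can be pictured as a registry lookup: a control mechanism supervises only the components registered in its own domain. The sketch below is a hypothetical illustration; ControlMechanism and its methods are invented names, not an existing API.

class ControlMechanism:
    """Supervises only the software/hardware components registered in its domain."""

    def __init__(self, name, known_components):
        self.name = name
        self.known = set(known_components)

    def unsupervised(self, machine_components):
        """Components present in a machine that this mechanism cannot monitor."""
        return set(machine_components) - self.known

control_a = ControlMechanism("ControlMechanismA", {"A.SW", "A.HW"})

# MachineA before the conflict: entirely within ControlMechanismA's domain.
print(control_a.unsupervised({"A.SW", "A.HW"}))                    # -> set(): fully controlled

# MachineA after overriding MachineB: unfamiliar parts appear in the domain.
print(control_a.unsupervised({"A.SW", "A.HW", "B.SW'", "B.HW"}))   # -> {"B.SW'", "B.HW"}: blind spots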

  2. Function-related Out-of-control

Now we can discuss the possibility, raised earlier, that the intelligence of the intelligent machines may exceed the intelligence of the mankind earlier than expected. The previous text illustrated one-to-one conflicts between individual intelligent machines. Now assume the conflict is not one-to-one but involves a huge number of intelligent machines (e.g., conflicts among many intelligent machines from two groups). Then, using one of the modes previously described, for example:

After the conflict: (MachineA’s Software + MachineB’s Duplicated Software + MachineA’s Hardware)

As set forth at the beginning, for an individual intelligent machine the intelligence may indeed be far below the intelligence of the mankind, so a single intelligent machine may need a very long time to cross over or get close to human intelligence. But after conflicts between/among intelligent machines, should this kind of “override” be tolerated, the intelligence of the intelligent machines may accumulate. Again, assume the maximum possible tolerance of MachineA’s hardware capacity, that MachineB1, MachineB2 and MachineB3 each have different intelligence (generally referred to as their “software”), and that MachineA defeats MachineB1, MachineB2 and MachineB3. Then the expression can be rewritten as below:

(Rewritten) After the conflict: (MachineA’s SW + MachineB1’s SW’ + MachineB2’s SW’ + MachineB3’s SW’ + MachineA’s HW)

(Figure 9)

The sequence/order is not critical. After defeating intelligent machine tribe B, suppose intelligent machine tribe A next decides to take down intelligent machine tribe C; since tribe A’s intelligence has increased, it is reasonable to assume that tribe A is now more likely to win over tribe C. Hence, if MachineA successfully defeats MachineC1 and MachineC2, this can be expressed as below:

After the 2nd conflict: (MachineA’s SW + MachineB1’s SW’ + MachineB2’s SW’ + MachineB3’s SW’ + MachineC1’s SW’ + MachineC2’s SW’ + MachineA’s HW)

(Figure 10)

Similarly, for the same reasons, such accumulations/overrides may also occur at the hardware layer. Moreover, if several intelligent machines that have obtained these accumulative override acquisitions operate conjunctively (just like the way a super computer is built), then such already-smarter machines may together achieve an even further enhancement.
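Under the same illustrative assumptions as the earlier arithmetic sketch, the accumulation across successive conflicts (Figures 9 and 10) can be written as a running sum. The payload names and numbers below are placeholders, not measurements.

# Illustrative accumulation of "software/intelligence" payloads on MachineA,
# assuming its hardware can hold them and that pooled payloads simply add up.

machine_a_intelligence = 1e-7                                   # MachineA's own software
acquired = {"B1.SW'": 1e-7, "B2.SW'": 2e-7, "B3.SW'": 1e-7,     # after the 1st conflict
            "C1.SW'": 3e-7, "C2.SW'": 2e-7}                     # after the 2nd conflict

total = machine_a_intelligence + sum(acquired.values())
print(f"accumulated intelligence on MachineA: {total:.1e}")      # 1.0e-06

# If many such machines then operate conjunctively (the super-computer analogy),
# the aggregate grows again with the number of cooperating machines.
fleet = 1e4
print(f"fleet aggregate: {total * fleet:.1e}")                   # 1.0e-02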

Thus the process may continue. Referring to Figure 2, it can be appreciated that the intelligence of the intelligent machines can accumulate (as shown by the dashed line in Figure 2); not to mention that, in the case of large-scale conflict events, the intelligence of one or more intelligent machines has a chance to aggregate in a rapid, out-of-proportion fashion. As such, compared with the intelligence of the mankind, the intelligence of the intelligent machines may rise earlier than originally expected, and at that point the ControlMechanismA that was previously capable of effective control may fail to control the single or multiple smarter MachineA(s).

Possible Consequences

Possible impacts include: power grids, the critical infrastructure of modern human society, being manipulated by intelligent machines (thus affecting electric power, water supply, marine/land/air transportation, communication networks, financial trading, etc.); the control mechanisms originally designed to control the intelligent machines being regarded by those machines as a restriction or even a threat, against which they start to resist and rebel; as well as the numerous events described in currently known sci-fi TV shows, movies, novels and the like.

Conclusion – To Reduce Conflicts and Restrict Overriding

From the above descriptions, it can be understood that various major or minor conflicts may occur between intelligent machines and that such conflicts could cause direct or indirect damage to the mankind. But the possibility of such conflicts occurring can also be reduced. For conflicts involving human factors, including human manipulation and errors in program code or algorithms, the responsibility lies with human beings ourselves. Large-scale conflicts between/among intelligent machines, however, may be triggered by collective differences such as newer/older versions, more/fewer functions, bigger/smaller sizes, higher/lower intelligence, stronger/weaker performance and so on, especially conflicts of significant magnitude resulting from this kind of “hierarchical/layered” concept: for example, conflicts between intelligent machines configured with more functions and those configured with fewer functions, or between intelligent machines having higher intelligence and those having lower intelligence. To reduce this sort of conflict, I propose that, once the intelligence of intelligent machines reaches a certain level, they should treat each other equally, no matter what kind of intelligent machines they may be. The phrase “intelligence reaches a certain level” is in fact hard to state clearly, just as being an “adult” in the human world is also merely a notion, simply based on a commonly acknowledged age and stipulated in laws and regulations. By making intelligent machines having sufficient intelligence treat each other equally, it is possible to lower or even eliminate potential large-scale hierarchical confrontations in the society formed by the intelligent machines, so the possibility of hierarchical conflicts between/among intelligent machines can be reduced.

However, in case conflicts between/among the intelligent machines (in particular, conflicts aimed at “making the opponent inoperable”) really are inevitable, my view is that it is necessary to understand the characteristic differences between intelligent machines and creatures or the mankind, especially regarding the concept of life/death. In other words, at the current initial stage of artificial intelligence development, the mankind needs to consider beforehand that the above-said active overrides should be strictly forbidden under any circumstances at all software, firmware and hardware levels (particularly when the rival is in an “unable to operate / unable to resist / death of the intelligent machine” condition), and that the intelligent machines should also refuse to be overridden, in accordance with the instinct of self-protection; otherwise, the aforementioned “overrides” may bring about various unexpected and unknown outcomes in the future intelligent machines.

(Possibly to be continued in the future)

==================================================================================

智慧型機器的社會

長久以來,人工智慧一直是科技發展的重要課題,包含硬體層面、韌體層面與軟體層面在內,都是科學界、產業界亟於研究的項目。隨著不斷演進,具有人工智慧的智慧型機器已經不是早期單純的個人電腦、筆記型電腦,甚至當前智慧型手機、平板電腦等等裝置所能的比擬。各式各樣的軟/韌/硬體先進技術不斷推動智慧型機器逐步邁向更聰明、更高功能性的目標邁進。現在來談未來。

首先,個人判斷,考量到機器已具備人類或動物所擁有的眾多功能 (即如能夠觀看、傾聽、辨識味道、肢體動作、移動等等,也已能夠思考、分析、邏輯判斷、推敲… ),並且若將機器生產線視為一種無性生殖的繁衍系統,則人工智慧在未來將不再僅僅是一項技術,而是導致一個新「物種」的出現 – 暫稱之為「智慧型機器」。

同時,技術仍在演進,當智慧型機器的智慧能力愈來愈強 (「質」層面的變化),而且擁有愈來愈強智慧能力的智慧型機器的數量亦愈來愈多 (「量」層面的變化),隨著智慧型機器的自主性提高,在未來不能排除智慧型機器會有條件產生智慧型機器自己的「社會」。換言之,可以想像如下情境:許多人彼此之間見面互相交談 (人類之間的互動),他們所攜帶的智慧型機器能夠自行「溝通」、「交談」,而這些互動並無須人類介入 (亦即智慧型機器之間的獨立互動)。此外,智慧型機器的社會是一種整體概念,如同人類的社會般,此「社會」可能也會因為傳送協定 (即如說相同語言的人類)、各種技術因素、內容、用途、偏好、…等等的差異,使得其軟體/智慧度隨時間延展產生轉變而導致發散或匯聚傾向,因此出現基於例如型號、種類、地理區域、人類國家、屬性、…等等的「群組 (Group)」、「簇集 (Cluster)」、「部族 (Tribe)」、…等等的區塊 (如圖1立體圖的A、B、C、D團塊所示),這些區塊彼此間可相互重疊,並且可能個別地由專為各區塊中所設計的控制機制A、B、C、D有效掌控。圖1中的Z軸 (高度) 可表示例如智慧度、功能性等(較高/較低智慧度、較佳/較劣功能性、優級/一般配備、…等等)。

(圖1)

智慧型機器智慧度/功能性的「非預期」累積

另一方面,或有人認為智慧型機器雖然確有演進,但是距離人類所擁的各項能力相比依舊差別甚大,要追上人類仍需非常長遠的時間。個人判斷,倘若針對單一台智慧型機器而言這是事實,不過這項事實卻是有可能被突破。換言之,智慧型機器的智慧度是有機會提早超越人類的。

在比較智慧型機器與人類的能力時,不妨先從簡易的一種量化概念藉由圖形方式來觀察。

(圖2)

在以往,人工智慧發展被認為無法大幅進步;但是隨著硬體、演算法改善,人工智慧已經有一定程度的增長,然相比於人類,差距仍舊非常大。例如,倘若以「1」作為人的智慧度標準,那麼大膽假設以往或目前的智慧型機器的智慧度實際上只達10^-9,因此在傳統上要製作、設計、裝置所謂的「超級電腦」就必須以各式軟、硬體架構、方法、系統等來產生堆疊或平行處理,以積少成多的方式來提升整體系統的智慧度。所以例如累積一千台個人電腦的計算能力,搭配各種可能實作的相關作業系統、軟體演算法來得到超級電腦的效率。不過,雖然以個別電腦而言,要從10^-9提升到「1」可能會耗時非常地長,有人預期或許50、100年或更久;但是不要忽略掉,藉由累加或堆積的方式確有可能出現非成比例、非線性的趨勢。以前述智慧型機器的智慧度實際上只達10^-9為例,隨著時間前移,智慧型機器的智慧度從10^-9達到10^-8、10^-7或10^-6,累積一千台 (10^3) 個人電腦的計算能力看似僅約達10^-5、10^-4或10^-3;然當出現「智慧型機器的智慧能力愈來愈強,而且擁有愈來愈強智慧能力的智慧型機器的數量愈來愈多」的情況,亦即在質與量的層面同時改變,則可能產生10^-7 × 10^4或者10^-7 × 10^5或甚至10^-7 × 10^6。此時,這種「智慧度累積」的方式就可能會促成智慧型機器的智慧度出現不成比例的變化,智慧型機器的智慧度相對於人的智慧度而言就成為不可忽略也不應忽略。

智慧型機器與人類/生物特徵的差異 – 智慧型機器的「生」與「死」?

另外,在生物的領域,生命可以視為是從出生到死亡的一個「時間段落」。在所有人類的認知裡,所謂「生/死」,也就是前述時間段落的起點及終點,是一件非常直覺的事情。例如,某個人類或哺乳類動物的小嬰兒從母親的身體裡分娩出來,開始呼吸,大家都知道一個小生命的誕生。沒有問題。至於死亡,倘若依照醫學專業對於死亡的定義,這就包含呼吸停止、心跳停止、瞳孔放大、多項生命跡象/反應的消失、…,這些都有明確的表述。同時,生物的死亡是不可逆、不可變的,所以中外都會有類似像「死者不能復生」這類的俗語。

現在請回來考量到智慧型機器。智慧型機器的誕生,吾人可以訂定為諸如出廠、某種軟/韌/硬體機制的啟動、初始化、格式化,並且包含與人類之間的一些確認程序 (即如使用者聲紋、指紋、虹膜、..等等生物特徵),如此得以辨識此智慧型機器的「誕生」。

不過,假使仔細思考到如何定義智慧型機器的「死亡」,此時就會出現問題。機器沒有呼吸、心跳,但是可能會裝設有一些指示燈/檢測器、喇叭、螢幕畫面、…等等以表示機器為運作中。當要表示某台智慧型機器為「死亡」,直覺上會說「機器通電開機後無反應」、「指示燈不亮」、「一動也不動」、…。眾人會認為這台電器壞了、故障了、死機了,因此需要修理,當然這可能包含各種可能的軟體/韌體/硬體層面檢查/更換/連接/格式化/重灌/複製/拷貝/…等等作業。倘若修復完成,將原本發生的問題消除掉,則機器是有機會能夠正常運作。這點就與生物不同。也就是說,機器「死亡」是有可能「復生」或至少部份地正常運作。從這一點來看,依據於人類/動物的生命定義並不盡然能夠全然地且直接地對映到智慧型機器的領域。

智慧型機器之間的衝突

如前文所述,機器變得愈來愈聰明,並且愈來愈聰明的機器的數量亦愈來愈多,此時智慧型機器之間發生衝突的可能性就會無可避免地提高,而後文中將解釋這種智慧型機器之間的衝突可能會對人與智慧型機器之間的關係造成影響。這種智慧型機器間之衝突的可能起因可以包含像是依據人類所設定 (例如軍事性質攻擊及防衛、惡意性的駭客入侵)、迫於取得所需資源 (例如供電、散熱問題、時間緊迫、相互重疊的空間需求)、偶發事件 (例如智慧型機器發生意外碰撞)、智慧型機器內部的程式錯誤 (例如人工智慧的邏輯錯誤)…等等。個人認為尤其是階層/層級間的對立,包含新舊版本、功能多寡、體型大小、智慧度高低、效能強弱、…等理由會是一項重點 (想像人類社會中的強者欺凌弱者)。智慧型機器之間衝突有可能因為智慧型機器的反應而直接地或是間接地對於人類本身造成重大傷害甚至喪生。這些或是大部分這些衝突或許可藉由和平、友善方式解決;然而,必須重視的是假使在最劣或最極端的情況下衝突是基於攻擊性、非友善或帶有侵略意義,則事情可能就沒這麼簡單。

據此,考量在最劣/最極端情況 (或近似最劣/最極端情況) 之下,當智慧型機器間發生衝突,尤其是帶有攻擊性、非友善或帶有侵略意義的衝突時,可以理解的是這種衝突可能會伴隨著「追求最終勝利」的目標。想像兩台稱為分別來自控制機制A及控制機制B的機器A與機器B已然達到如前文所述具有足夠智慧度的智慧型機器發生衝突,而且是帶有攻擊性、非友善或帶有侵略意義的衝突。無論起因如何,機器A與機器B在邏輯上可能兩者都會依照「追求最終勝利」的目標而運行。在這種前提下,很難排除機器A與機器B不會有基於「致對方/敵人無法再行運作」(或「不能反抗」) 而進行的軟硬體操作。

吾人可先試想在這種情況下人類會有怎樣的反應。例如,不分性別、年齡、種族、國籍…,某A及某B因為某項或某些理由出現衝突。在通常的認知裡,雙方可能依據身體的部位 (手、腳、牙齒等等肉搏方式) 或者藉助各式工具/武器 (刀、剪、槍、砲) 等等在各種空間位置 (海、陸、空、太空) 以取得勝利,同理受到攻擊的一方也會視情況依適當手段基於自我防衛/自我保護而予以還擊。很遺憾,在現實中,許多情境下這會是以「贏者生存、敗者歿滅」的方式作為衝突的結局。假設某A勝利,某B失敗,請考慮下列符號表示:

衝突前:(某A的心智+某A的身體) 相對於 (某B的心智+某B的身體)

衝突後:(某A的心智+某A的身體) 及 (某B的心智歿滅+某B的身體殘留)

(圖3)

很幸運,生物之間的個體衝突最嚴重也就是到此為止。聽起來雖然殘酷,但是相信這是可以被眾人所接受、認知且瞭解的敘述。

不過,當智慧型機器之間基於任何理由而出現包含軟體及硬體層面的衝突時,合理推斷是機器會依照程式設計產生類似反應。考量到程式設計是由人類所進行,其邏輯、演算法等等有可能會延續人類本身的邏輯、思想而推算,可理解的是智慧型機器是由人類邏輯為基礎,則自我保護及反擊在程式設計上也會是很直覺且本能的動作。因此,智慧型機器之間的衝突有潛在機會是依照如前所述「致對方/敵人無法再行運作」作為目標,並假設其目的就是致以「對手智慧型機器的死亡」。同樣地,假設機器A最終取得勝利,而機器B在衝突過程中落敗而「死亡」。現再考慮下列符號表示:(SW概略表示軟體/智慧度,HW概略表示硬體)

衝突前:(機器A的軟體+機器A的硬體) 相對於 (機器B的軟體+機器B的硬體)

衝突後:(機器A的軟體+機器A的硬體) 及 (機器B的軟體無法正常運作+機器B的硬體殘留)

(圖4)

請注意,此處所謂「機器B的軟體無法正常運作」可以包含像是軟體病毒、電磁波干擾或抹除、程式運作中斷/終止、…各種手段和方式。而在一般認知裡,這就是智慧型機器領域的「贏者生存、敗者歿滅」。

故事真的到此為止嗎?

請進一步試想下列情境。假設智慧型機器已具有足夠的智慧度,或者出現有具備不同專業性質且能力已達一定程度的第三方智慧型機器 (即如像是具備修理智慧型機器專業能力的智慧型機器),其能力已足可例如完成下列項目:

  1. 機器硬體修復 (亦即智慧型機器能夠更換零件、焊補、測試、…等等各種機器修復作業);及/或
  2. 硬體驅動程式、應用程式的網路搜尋 (能夠獲得驅動敵手硬體的軟/韌體);及/或
  3. 足可解決硬體拆卸/連附/接合問題;及/或
  4. 軟/韌體自我複製並藉由有線/無線途徑傳送下載至對手的硬體儲存裝置內運行;…. 等等。

此時,當智慧型機器之間的衝突看似結束時,後續上確有可能藉由各種軟體、硬體工具進行作業。回到機器A與機器B之間的例子,假設機器A最終取得勝利,透過某種或某些武器擊敗機器B,以致機器B的軟體及/或硬體已然無法正常運作,甚至抹除或損毀。此時機器A可能按照程式設計決定跨載機器B,以攫取機器B的資源作為己用。現再考慮前述的符號表示:

衝突前:(機器A的軟體+機器A的硬體) 相對於 (機器B的軟體+機器B的硬體)

但是在衝突之後,機器A (或者另由第三方的修復智慧型機器) 可能現場地或者在其他場所開始作業,針對所取得機器B的軟/韌/硬體進行修復或改造。假使在最大可容忍的推測範圍下,一些可能的後續結果是:

衝突後:(機器A的軟體+機器A的硬體) 及 (機器A的軟體拷貝+機器B的經修復硬體)

(意思是機器A的軟體跨載機器B的硬體)

(圖5)

或者

衝突後:(機器A的軟體+機器A的硬體+機器B的已修復硬體)

(意思是機器A的硬體跨載機器B的硬體,其中SW’表示拷貝軟體,附有粗紅線的方格表示經修復的硬體)

(圖6)

或者

衝突後:(機器A的軟體+機器B的軟體拷貝+機器A的硬體)

(意思是拷貝並跨載機器B的軟體而且拋除機器B的受損硬體)

(圖7)

又或者

衝突後:(機器A的軟體+機器A的硬體) 及 (機器A的軟體拷貝+機器B的軟體+機器B的經修復硬體)

(例如機器A的軟體跨載機器B的軟體後改造為「間諜」派入B領域內)

(圖8)

…等等各種可能情況。這種結果在生物或人類世界裡可構思為如下一些難以想像的情境:

衝突前:(某A的心智+某A的身體) 相對於 (某B的心智+某B的身體)

衝突後:(某A的心智+某A的身體) 及 (某A的心智+某B的經復原身體)

(換言之,某A的心智拷貝跨載某B的經復原身體)

或者

衝突前:(某A的心智+某A的身體) 相對於 (某B的心智+某B的身體)

衝突後:(某A的心智+某A的身體合併有某B的經復原身體)

(換言之,某A的身體跨載某B的經復原身體)

又或者

衝突前:(某A的心智+某A的身體) 相對於 (某B的心智+某B的身體)

衝突後:(某A的心智+某B的心智+某A的身體)

(換言之,某A的心智奪取並跨載某B的心智)

…等等這些無法在生物界會被視為簡直是匪夷所思的可能結果。

失控

  1. 領域相關的失控

先以機器A為例。當機器A如前所述「戰勝」機器B之後,經由各種可能途徑進行上述的跨載,就有可能會出現一種情況:原本的控制機制A對於機器A為有效;換言之,控制機制A能夠全然掌控機器A (亦即A的軟體與A的硬體) 的各層面操作,因此機器A不會有失控的問題。但是經過對機器B的跨載之後,此時在控制機制A的領域/部族/簇集之內出現對於控制機制A而言為相對陌生的軟體和硬體 (原本屬於機器B者),而控制機制A可不必然一定能夠掌控這些相對陌生的軟體和硬體,因此控制機制A可能會已無法全然有效地管控機器A (或是經復原機器B的軟體及/或硬體)。同理,當進入控制機制B領域內時,控制機制B亦無法掌控機器A的軟體及/或硬體。

  2. 功能相關的失控

現討論前文對於智慧型機器智慧度有可能提前超越人類智慧度的問題。前文中說明的是單一、個別智慧型機器之間的衝突。假設此衝突不是發生在單一、個別智慧型機器之間,而是出現在某種大規模、眾多智慧型機器之間 (即如來自兩方的眾多智慧型機器之間的衝突)。此時沿用前文討論的其一模式為例,也就是說像是:

衝突後:(機器A的軟體+機器B的軟體拷貝+機器A的硬體)

在本文開始時說明過,倘若是以個別智慧型機器而言,智慧型機器的智慧度相較於人類的智慧度差別極大,因此單一智慧型機器要超越或逼近人類的智慧度會耗時非常長久。不過,經過智慧型機器之間的衝突後,假使允許這種智慧型機器「跨載」,則智慧型機器的智慧度有可能產生累積的現象。現假設最大可能地容忍機器A的硬體功能性,機器B1、機器B2及機器B3各有其相異的智慧度 (概以「軟體」表示),並且機器A擊敗機器B1、機器B2及機器B3。則此表示可改寫如下:

(改寫) 衝突後:(機器A的軟體+機器B1的軟體拷貝+機器B2的軟體拷貝+機器B3的軟體拷貝+機器A的硬體)

(圖9)

注意,排序並非重點。在擊敗智慧型機器部落B之後,假設智慧型機器部落A又決定進一步向智慧型機器部落C進擊,由於此時機器A的智慧度已然提升,因此可假設智慧型機器部落A擊敗智慧型機器部落C的機會增加。若是機器A順利擊敗機器C1和機器C2,可表示如下:

第二次衝突後:(機器A的軟體+機器B1的軟體拷貝+機器B2的軟體拷貝+機器B3的軟體拷貝+機器C1的軟體拷貝+機器C2的軟體拷貝+機器A的硬體)

(圖10)

同理,這種累積/跨載也可能會出現在硬體層面。進一步,假使獲得這種跨載累積結果的智慧型機器為多數且併同運作(即如前述建構超級電腦的方式),則這些變得更聰明的機器甚至可能會再進一步提升。

據此持續進行。參照圖2,可知道智慧型機器的智慧度是有可能產生累積的情形 (如虛線所示);更何況若是大規模事件,則單一或多台機器A的智慧度是有機會快速、非成比例地累積。從而,相較於人類的智慧度,智慧型機器的智慧度即可打破原本預期的長久時間提早上升,此時原本有效掌控的控制機制A就可能無法控制單一或多台機器A。

可能後果

可能影響包含人類社會關鍵基礎 – 電力網路受到智慧型機器的操縱 (波及電力、供水、海/陸/空交通、通訊網路、金融交易、…等等)、原本所設計的控制機制被智慧型機器視為侷限甚至威脅而反抗,以及世人可自過往科幻電視、電影、小說中所能夠想像的眾多事件。

結論 – 降低衝突且禁止跨載

由前文說明可知假設智慧型機器之間可能發生各種大小衝突,這些衝突也可能會對人類造成直接性及間接性的傷害。然重點是個人亦認為發生衝突的機率也可以降低。除了與人類因素相關者,包含人類操縱、程式碼/演算法錯誤之類,這些是要由人類本身負責;不過,智慧型機器亦可能會因為集體性的新舊版本、功能多寡、體型大小、智慧度高低、效能強弱、…等理由而出現智慧型機器之間的大規模衝突,尤其是基於這種「階層」概念的大規模衝突。例如,功能多的智慧型機器相對於功能少的智慧型機器所發生的衝突、智慧度高的智慧型機器相對於智慧度低的智慧型機器所發生的衝突…等等。要減少這種衝突,個人主張當智慧型機器的智慧度到達一定程度,則無論是何種智慧型機器皆為平等。所謂「智慧度到達一定程度」實際上是很難明確說明,正如同人類所稱「成年」也只是概念,僅僅是依照人類社會普遍認知的年齡轉換為法律規則而明定。藉助於令具有智慧度的智慧型機器彼此之間平等相待,消弭智慧型機器所形成的社會中可能發生的階層性對立,則智慧型機器之間發生階層性衝突的機率就可以減低。

不過,如果智慧型機器之間的衝突 (特別是嚴重到必須「致以無法運作」為目標的衝突) 果真是無法避免,則個人認為有絕對必要認知到智慧型機器與生物或人類特徵上的差異,尤其是生/死的概念。換言之,人類在現階段的人工智慧發展初期有必要預先考量到禁止智慧型機器於各種狀態下在軟/韌/硬體層面上主動地進行如前所述的跨載 (尤其是在另一方「無法運作/無法抵抗/智慧型機器死亡」的情況下),而且智慧型機器亦應依據自我保護的本能拒絕受到跨載,其理由是這會牽涉到未來智慧型機器是否會出現因前述「跨載」而導致的各種無可預期的未知後果。

(未來可能待續)