The Asilomar Principles: A Guide for the Development of Artificial Intelligence

The development of artificial intelligence (AI) should follow a set of guidelines so that the technology is not misused. Such a guide is needed to prevent abuse of the extraordinary capabilities possessed by AI, which in some tasks can learn even better than human professionals. Artificial intelligence itself is defined as the intelligence of a scientific entity, that is, a product of the human mind [1].

The 2015 artificial intelligence conference in Puerto Rico (source: https://futureoflife.org/bai-2017)

In January 2015, a conference on artificial intelligence was held in Puerto Rico. The five-day conference brought together AI researchers from academia and industry along with leading thinkers in economics, law, ethics, and philosophy, and was dedicated to how artificial intelligence could be made fully beneficial for mankind. The event was also attended by Elon Musk, CEO of SpaceX and Tesla Inc. [2].

Then, following up on the 2015 Puerto Rico conference and the results of the BAI 2017 conference [3], the Future of Life Institute issued the Asilomar AI Principles. These guidelines consist of 23 key points drawn up by robotics and artificial intelligence experts together with technology leaders to ensure that AI development remains safe and is not misused. The principles have been signed by more than 2,000 people, including 844 researchers in the fields of AI and robotics. Signatories include SpaceX and Tesla CEO Elon Musk, Google DeepMind founder Demis Hassabis, cosmologist Stephen Hawking, and other leading thinkers [4].

Elon Musk during a panel discussion at the BENEFICIAL AI 2017 conference (source: https://futureoflife.org/bai-2017/)

The Asilomar Principles contain 23 points divided into three sections: Research Issues (5 points), Ethics and Values (13 points), and Longer-term Issues (5 points). The 23 points are listed in the following tables [5]:

  • Research Issues

1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including difficult questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people's resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
  • Ethics and Values

6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage, and control the data they generate, given AI systems' power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
  • Longer-term Issues

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Based on the 23 points of the Asilomar Principles for the development of artificial intelligence, we are expected to be ready to accept AI and even to coexist with it in the future. The principles aim to minimize the risk of outcomes we do not want, because intelligent systems will only become more intelligent over time. The hope is that humans can live normally, side by side with intelligent machines (robots) and intelligent systems that do not surpass or dominate mankind the way they do in the film The Terminator.

Below is a video of the conference held on 6-8 January 2017 discussing the beneficial use of artificial intelligence.

References:

  1. Wikipedia Indonesia, "Kecerdasan Buatan (Artificial Intelligence)" (https://id.wikipedia.org/wiki/Kecerdasan_buatan), retrieved December 8, 2017.
  2. Future of Life Institute, "BENEFICIAL AI 2017" (https://futureoflife.org/bai-2017), retrieved December 8, 2017.
  3. Future of Life Institute, "A Principled AI Discussion in Asilomar" (https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/), retrieved December 8, 2017.
  4. Crowe, Steve, "Asilomar AI Principles: 23 Tips for Making Safe AI", Robotics Trends, February 3, 2017 (http://www.roboticstrends.com/article/asilomar_ai_principles_23_rules_for_making_ai_safe/Artificial_Intelligence), retrieved December 8, 2017.
  5. Future of Life Institute, "Asilomar AI Principles" (https://futureoflife.org/ai-principles/), retrieved December 8, 2017.
