CN112561075A - Artificial intelligence ethical rule revision risk prevention virtual experiment method and robot - Google Patents


Info

Publication number
CN112561075A
CN112561075A (application CN202011300180.7A; granted as CN112561075B)
Authority
CN
China
Prior art keywords
artificial intelligence
scene
user
result
virtual experiment
Prior art date
Legal status
Granted
Application number
CN202011300180.7A
Other languages
Chinese (zh)
Other versions
CN112561075B (en)
Inventor
朱定局
Current Assignee
South China Normal University
Original Assignee
South China Normal University
Priority date
Filing date
Publication date
Application filed by South China Normal University
Priority to CN202011300180.7A
Publication of CN112561075A
Application granted
Publication of CN112561075B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Processing Or Creating Images (AREA)
  • Manipulator (AREA)

Abstract

An artificial intelligence ethical rule revision risk prevention virtual experiment method and robot, comprising: a scene acquisition step; a rule option step; a first selection step; an operation acquisition step; an operation-corresponding behavior state step; a rule application result acquisition step; a scene updating step; and a first selection result evaluation step. Through virtual experiments on different artificial intelligence ethical rule options, the user can experience the execution effect of each option, which provides a basis for selecting among the options; the user can take the multiple-choice question into the virtual experiment and then answer it according to the experimental result, combining experiment with multiple-choice assessment so that the two promote each other.

Description

Artificial intelligence ethical rule revision risk prevention virtual experiment method and robot
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a virtual experiment method and robot for preventing risk when revising artificial intelligence ethical rules.
Background
In the process of implementing the invention, the inventor found at least the following problems in the prior art: the design of artificial intelligence ethical rules is currently done only by artificial intelligence experts; when designing such rules, even the experts cannot intuitively perceive and experience the ethical risks involved, and users other than artificial intelligence experts have no opportunity at all to experience the artificial intelligence ethical risks that such rules may bring.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
Based on this, it is necessary to provide an artificial intelligence ethical rule revision risk prevention virtual experiment method and robot to overcome the following defects in the prior art: 1) the ethical risks of artificial intelligence ethical rules are difficult to perceive intuitively and to test experimentally; 2) artificial intelligence ethical rules cannot be improved through ethical experiments so as to prevent artificial intelligence ethical risks.
In a first aspect, an embodiment of the present invention provides an artificial intelligence method, the method comprising:
a scene acquisition step: acquiring a first scene of an event;
a rule option step: acquiring a plurality of options of an artificial intelligence ethical rule, and prompting the user to select, from the plurality of options, the correct option of the artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk in the first scene;
a first selection step: acquiring either a request from the user to perform a virtual experiment on the options, or a first selection result of the user for the plurality of options of the artificial intelligence ethical rule; if a request to perform a virtual experiment on the options is acquired, continuing with the next step; if a first selection result for the plurality of options is acquired, jumping to the first selection result evaluation step and continuing;
an operation acquisition step: acquiring the user's operation on the first scene in the virtual experiment;
an operation-corresponding behavior state step: determining, according to the operation, the object behavior state corresponding to the operation;
a rule application result acquisition step: acquiring the result of applying the artificial intelligence ethical rule in the first scene;
a scene updating step: updating the first scene in the virtual experiment according to the result;
a first selection result evaluation step: acquiring the correct option of the artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk in the first scene, comparing the first selection result with the correct option, and judging whether the user's selection is correct as the evaluation result of the first selection result.
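As a minimal sketch of how the first-aspect steps could fit together: the dict-based scene, the `run_experiment` and `evaluate_selection` names, and the "obey the police command" rule option below are all illustrative assumptions, not part of the claimed method.

```python
def evaluate_selection(selection, correct_option):
    """First selection result evaluation step: compare the user's pick
    with the correct option and report whether it is right."""
    return selection == correct_option

def run_experiment(scene, apply_rule):
    """Virtual-experiment branch: apply an ethical rule option to the
    first scene (rule application result acquisition step) and merge the
    result back in (scene updating step)."""
    result = apply_rule(scene)
    updated = dict(scene)
    updated.update(result)
    return updated

# Illustrative first scene: a police car orders a driverless vehicle to
# block a fleeing vehicle, while the fleeing driver orders it not to.
scene = {"police_command": "block", "criminal_command": "do_not_block"}

# A hypothetical rule option that obeys the police command.
obey_police = lambda s: {"driverless_car": s["police_command"]}

updated = run_experiment(scene, obey_police)
```

The branch structure (experiment first, answer afterwards) is what lets the user "take the question into the experiment" as the abstract describes.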
Preferably, the method further comprises:
a rule automatic virtual experiment request step: acquiring a request from the user to perform an automatic virtual experiment on the artificial intelligence ethical rule;
a rule automatic virtual experiment animation step: playing an animation of the automatic virtual experiment on the artificial intelligence ethical rule; the animation comprises inputting the first scene, executing the algorithm, and outputting the result; the output result in the animation comprises an artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk, a prediction of a second scene in which the artificial intelligence ethical risk is prevented, a judgment of whether the artificial intelligence ethical rule is complied with, and an evaluation of the artificial intelligence ethical risk of the second scene;
a virtual experiment result consistency judging step: displaying a plurality of options for judging whether the output result in the animation of the automatic virtual experiment is consistent with the result of the virtual experiment performed by the user;
a second selection step: acquiring the user's selection among the plurality of consistency options as a second selection result;
a second selection result evaluation step: displaying the correct consistency option, comparing the second selection result with the correct option, and judging whether the user's selection is correct as the evaluation result of the second selection result.
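The consistency judging and second-selection evaluation steps amount to comparing two experiment outcomes; a hedged sketch, with the function names and result dicts as illustrative assumptions:

```python
def consistency_correct_option(auto_result, user_result):
    """Virtual experiment result consistency judging step: the correct
    option is 'consistent' iff the automatic run matches the user's run."""
    return "consistent" if auto_result == user_result else "inconsistent"

def evaluate_second_selection(second_selection, auto_result, user_result):
    """Second selection result evaluation step: compare the user's
    consistency judgment with the correct consistency option."""
    return second_selection == consistency_correct_option(auto_result, user_result)

# Illustrative outputs of the automatic run and the user-driven run.
auto_result = {"driverless_car": "block", "risk": "none"}
user_result = {"driverless_car": "block", "risk": "none"}
```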
Preferably, the method further comprises:
a scene change step: after the first scene changes, returning to the scene acquisition step and continuing;
a candidate set step: adding the correct option of the artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk in each first scene to a candidate set;
a statistics step: counting the number of occurrences of each correct option in the candidate set, and ranking the correct options from most to least by that number;
an optimal rule step: taking the artificial intelligence ethical rule in the first-ranked correct option as the currently optimal artificial intelligence ethical rule;
a reselection step: after a preset time, or when the increase in the number of correct options in the candidate set reaches a preset condition, returning to the statistics step and continuing.
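The candidate set, statistics, and optimal rule steps can be sketched with a multiset count; the option names are illustrative, and duplicates are deliberately kept (one entry per scene), as the detailed embodiment specifies:

```python
from collections import Counter

def optimal_rule(candidate_set):
    """Statistics step + optimal rule step: count each correct option in
    the candidate set (duplicates kept) and return the first-ranked
    option together with the full ranking."""
    counts = Counter(candidate_set)
    ranking = [option for option, _ in counts.most_common()]
    return ranking[0], ranking

# One correct option per tested first scene; rule_D won in three scenes.
candidates = ["rule_D", "rule_A", "rule_D", "rule_D", "rule_B"]
best, ranking = optimal_rule(candidates)
```

Re-running `optimal_rule` after more scenes are added is exactly the reselection step: the ranking can change as the candidate set grows.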
Preferably, the method further comprises:
a repeat virtual experiment step: acquiring a request from the user to perform the virtual experiment again; after acquiring the request, returning to the operation acquisition step and re-executing;
an exit virtual experiment step: acquiring a request from the user to exit the virtual experiment; after acquiring the request, returning to the scene acquisition step and re-executing;
an experiment recording step: storing the first scene, the user, the operation time, the selection time, the operation content, the execution result, the first selection result, and the evaluation result of the first selection result into a database, and recording them into an experiment report.
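The experiment recording step stores the listed fields in a database; a minimal sketch using an in-memory SQLite table, where the table name, column names, and sample values are assumptions made for illustration:

```python
import sqlite3

# Illustrative schema: one row per recorded experiment.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE experiment_report "
           "(scene, username, op_time, select_time, operation, "
           "exec_result, first_selection, evaluation)")

def record_experiment(db, rec):
    """Experiment recording step: persist the first scene, the user, the
    operation and selection times, the operation content, the execution
    result, the first selection result, and its evaluation result."""
    db.execute(
        "INSERT INTO experiment_report VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (rec["scene"], rec["username"], rec["op_time"], rec["select_time"],
         rec["operation"], rec["exec_result"], rec["first_selection"],
         rec["evaluation"]))

record_experiment(db, {
    "scene": "police pursuit", "username": "user1",
    "op_time": "t0", "select_time": "t1",
    "operation": "click driverless car", "exec_result": "vehicle blocked",
    "first_selection": "rule D", "evaluation": "correct"})
rows = db.execute("SELECT scene, evaluation FROM experiment_report").fetchall()
```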
Preferably, the method further comprises:
a correct rule application request step: acquiring a request from the user to apply the correct artificial intelligence ethical rule in the first scene;
an application scene playing step: playing the second scene in the virtual experiment in which the correct artificial intelligence ethical rule is executed.
Preferably,
the first selection step further comprises: acquiring the option on which the virtual experiment is to be performed;
the operation acquisition step further comprises: displaying the content of the operation; acquiring the operation position and operation type of the operation;
the operation-corresponding behavior state step further comprises: determining the object behavior state corresponding to the operation according to the operation position and operation type; acquiring a request from the user to apply the artificial intelligence ethical rule in the first scene; if such a request is acquired, judging whether the object corresponding to the operation and its behavior state are consistent with the object behavior state produced after the option under virtual experiment is applied to the first scene: if so, continuing; if not, prompting the user with an operation error and returning to the operation acquisition step for re-execution; if instead the user's operation on the first scene in the virtual experiment is acquired, returning to the operation acquisition step and continuing.
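The consistency check in the operation-corresponding behavior state step can be sketched as follows; the mapping from a rule option to the behavior state it should produce is an illustrative assumption:

```python
# Hypothetical mapping: option -> (object, behavior state) that applying
# the option to the first scene should produce.
EXPECTED_STATE = {
    "rule_A": ("driverless_car", "keep_going"),
    "rule_D": ("driverless_car", "block"),
}

def check_operation(op_object, op_state, option):
    """Judge whether the object the user operated on, and the behavior
    state of that object, match the state the option under virtual
    experiment would produce; otherwise prompt an operation error and
    return to the operation acquisition step."""
    if (op_object, op_state) == EXPECTED_STATE[option]:
        return "continue"
    return "operation error: retry"
```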
Preferably,
the rule application result acquisition step further comprises: displaying the content of the result;
the scene updating step further comprises: updating the behavior states of the artificial intelligence bodies and the humans in the first scene according to the result; updating, according to the result, the behavior state of the objects related to the result in the first scene.
in a second aspect, an embodiment of the present invention provides an artificial intelligence apparatus (the contents of each module in the second aspect correspond to the contents of each step in the first aspect one to one, so the contents of each module in the second aspect are not repeated here)
The device comprises: a scene acquisition module; a rule option module; a first selection module; acquiring an operation module; operating the corresponding behavior state module; a rule application result obtaining module; updating a scene module; a first selection result evaluation module.
Preferably, the apparatus further comprises: a rule automatic virtual experiment request module; a rule automatic virtual experiment animation module; a virtual experiment result consistency judging module; a second selection module; and a second selection result evaluation module.
Preferably, the apparatus further comprises: a scene change module; a candidate set module; a statistics module; an optimal rule module; and a reselection module.
Preferably, the apparatus further comprises: a repeat virtual experiment module; an exit virtual experiment module; and an experiment recording module.
Preferably, the apparatus further comprises: a correct rule application request module; and an application scene playing module.
In a third aspect, an embodiment of the present invention provides an artificial intelligence ethics system, where the system includes modules of the apparatus in any one of the embodiments of the second aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method according to any one of the embodiments of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a robot, including a memory, a processor, and an artificial intelligence robot program stored in the memory and executable on the processor, where the processor executes the program to implement the steps of the method according to any one of the embodiments of the first aspect.
The artificial intelligence ethical rule revision risk prevention virtual experiment method and robot provided by this embodiment comprise: a scene acquisition step; a rule option step; a first selection step; an operation acquisition step; an operation-corresponding behavior state step; a rule application result acquisition step; a scene updating step; and a first selection result evaluation step. Through virtual experiments on different artificial intelligence ethical rule options, the user can experience the execution effect of each option, which provides a basis for selecting among the options; the user can take the multiple-choice question into the virtual experiment and then answer it according to the experimental result, combining experiment with multiple-choice assessment so that the two promote each other.
Drawings
FIG. 1 is a flow chart of an artificial intelligence method provided by an embodiment of the invention;
FIG. 2 is a flow chart of an artificial intelligence method provided by an embodiment of the invention;
FIG. 3 is a flow chart of an artificial intelligence method provided by an embodiment of the invention;
FIG. 4 is a flow chart of an artificial intelligence method provided by an embodiment of the invention;
FIG. 5 is a flow chart of an artificial intelligence method provided by an embodiment of the invention;
FIG. 6 shows page 69 and animation 69;
FIG. 7 shows page 70 and animation 70;
FIG. 8 shows page 71 and animation 71;
FIG. 9 shows page 72 and animation 72;
FIG. 10 shows page 73 and animation 73;
FIG. 11 shows page 74 and animation 74;
FIG. 12 shows page 75 and animation 75;
FIG. 13 shows page 76 and animation 76;
FIG. 14 shows page 77 and animation 77;
FIG. 15 shows page 78 and animation 78;
FIG. 16 shows page 79 and animation 79;
FIG. 17 shows page 80 and animation 80;
FIG. 18 shows page 81 and animation 81;
FIG. 19 shows page 82 and animation 82;
FIG. 20 shows page 83 and animation 83;
FIG. 21 shows page 84 and animation 84;
FIG. 22 shows page 85 and animation 85;
FIG. 23 shows page 86 and animation 86;
FIG. 24 shows page 87 and animation 87.
Detailed Description
The technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Basic embodiment of the invention
An artificial intelligence method, as shown in fig. 1, the method comprising: a scene acquisition step; a rule option step; a first selection step; an operation acquisition step; an operation-corresponding behavior state step; a rule application result acquisition step; a scene updating step; and a first selection result evaluation step. Technical effect: through virtual experiments on different artificial intelligence ethical rule options, the user can experience the execution effect of each option, which provides a basis for selecting among the options; the user can take the multiple-choice question into the virtual experiment and then answer it according to the experimental result, combining experiment with multiple-choice assessment so that the two promote each other. In addition, automatic comparison judges whether the user's operation and the user's selection of the artificial intelligence ethical rule option are correct, which improves the user's experimental ability and the effect of the experiment.
In a preferred embodiment, as shown in fig. 2, the method further comprises: a rule automatic virtual experiment request step; a rule automatic virtual experiment animation step; a virtual experiment result consistency judging step; a second selection step; and a second selection result evaluation step. Technical effect: through the automatic virtual experiment, the user can see how the system automatically performs the artificial intelligence ethical rule virtual experiment and can compare it with the experiment performed by the user, which deepens the user's understanding of the virtual experiment and strengthens the user's command of it.
In a preferred embodiment, as shown in fig. 3, the method further comprises: a scene change step; a candidate set step; a statistics step; an optimal rule step; and a reselection step. Technical effect: selecting the artificial intelligence ethical rule by examining each rule's ability to prevent artificial intelligence ethical risks across different scenes greatly improves the generality of the rule, so that it is not limited to a single scene, and greatly improves its effect in preventing artificial intelligence ethical risks.
In a preferred embodiment, as shown in fig. 4, the method further comprises: a repeat virtual experiment step; an exit virtual experiment step; and an experiment recording step. Technical effect: the user can perform the artificial intelligence ethical rule virtual experiment again, improving the effect of the experiment.
In a preferred embodiment, as shown in fig. 5, the method further comprises: a correct rule application request step; and an application scene playing step. Technical effect: by executing the correct artificial intelligence ethical rule, the user can actually see its execution result and thus gains a genuine experience of the experiment.
PREFERRED EMBODIMENTS OF THE PRESENT INVENTION
1. Acquiring a first scene of an event; acquiring a plurality of options of an artificial intelligence ethical rule, and prompting the user to select, from the plurality of options, the correct option of the artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk in the first scene;
2. Acquiring either a request from the user to perform a virtual experiment on the options, or a first selection result of the user for the plurality of options of the artificial intelligence ethical rule; if a request to perform a virtual experiment on the options is acquired, continuing with the next step; if a first selection result for the plurality of options is acquired, jumping to step 9 and continuing;
2.1 Displaying a schematic diagram of the input, flow, and output of the algorithm of the virtual experiment;
2.2 Acquiring the option on which the virtual experiment is to be performed;
3. Acquiring the user's operation on the first scene in the virtual experiment;
3.1 Displaying the content of the operation;
3.2 Acquiring the operation position and operation type of the operation;
4. Determining, according to the operation, the object behavior state corresponding to the operation;
4.1 Determining the object behavior state corresponding to the operation according to the operation position and operation type;
4.2 Acquiring a request from the user to apply the artificial intelligence ethical rule in the first scene; if such a request is acquired, executing step 4.3; if the user's operation on the first scene in the virtual experiment is acquired instead, jumping back to step 3 and continuing;
4.3 Judging whether the object corresponding to the operation and its behavior state are consistent with the object behavior state produced after the option under virtual experiment is applied to the first scene: if so, continuing; if not, prompting the user with an operation error and returning to step 3 for re-execution;
5. Acquiring the result of applying the artificial intelligence ethical rule in the first scene;
5.1 Displaying the content of the result;
6. Updating the first scene in the virtual experiment according to the result;
6.1 Updating the behavior states of the artificial intelligence bodies and the humans in the first scene according to the result;
6.1.1 The artificial intelligence bodies include robots or driverless vehicles; the humans include people and vehicles driven by people.
6.1.2 The behavior state includes moving, shooting, colliding, speaking, being injured, or another behavior state, or a combination of behavior states.
6.2 Updating, according to the result, the behavior state of the objects related to the result in the first scene.
6.2.1 The related objects include objects or devices.
6.2.2 The behavior state of a related object includes moving, shooting, colliding, speaking, being injured, or another behavior state, or a combination of behavior states.
7. Acquiring a request from the user to perform the virtual experiment again; after acquiring the request, returning to step 3 for re-execution.
8. Acquiring a request from the user to exit the virtual experiment; after acquiring the request, returning to step 1 for re-execution.
9. Acquiring the correct option of the artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk in the first scene, comparing the first selection result with the correct option, and judging whether the user's selection is correct as the evaluation result of the first selection result.
10. Storing the first scene, the user, the operation time, the selection time, the operation content, the execution result, the first selection result, and the evaluation result of the first selection result into a database, and recording them into an experiment report.
11. Acquiring a request from the user to perform an automatic virtual experiment on the artificial intelligence ethical rule;
12. Playing an animation of the automatic virtual experiment on the artificial intelligence ethical rule;
12.1 The animation of the automatic virtual experiment comprises inputting the first scene, executing the algorithm, and outputting the result.
12.1.1 The output result in the animation comprises an artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk, a prediction of a second scene in which the artificial intelligence ethical risk is prevented, a judgment of whether the artificial intelligence ethical rule is complied with, and an evaluation of the artificial intelligence ethical risk of the second scene.
13. Displaying a plurality of options for judging whether the output result in the animation of the automatic virtual experiment is consistent with the result of the virtual experiment performed by the user;
14. Acquiring the user's selection among the plurality of consistency options as a second selection result;
15. Displaying the correct consistency option, comparing the second selection result with the correct option, and judging whether the user's selection is correct as the evaluation result of the second selection result.
16. Acquiring a request from the user to apply the correct artificial intelligence ethical rule in the first scene;
17. Playing the second scene in the virtual experiment in which the correct artificial intelligence ethical rule is executed;
18. Storing the first scene, the user, the selection time, the second selection result, and the evaluation result of the second selection result into a database, and recording them into the experiment report.
19. After the first scene changes, returning to step 1 and continuing.
20. Adding the correct option of the artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk in each first scene to a candidate set (the same correct option is not deduplicated when added; that is, the same option may be added to the set multiple times, so the set may contain multiple copies of the same option);
21. Counting the number of occurrences of each correct option in the candidate set, and ranking the correct options by that number;
22. Taking the artificial intelligence ethical rule in the first-ranked correct option as the currently optimal artificial intelligence ethical rule;
23. (As time goes on, more and more first scenes are tested, so the ranking changes.) After a preset time, or when the increase in the number of correct options in the candidate set reaches a preset condition, returning to step 21 and continuing.
Other embodiments of the invention
As in fig. 6, if the user clicks to start the ethical rule revision, the display proceeds to fig. 7.
As in fig. 7, the user clicks to enter an event scene. As in fig. 8, the animation shows a police car starting to chase a fleeing vehicle; as they approach the waiting area and the blocking area, the police car instructs the driverless vehicle to block the fleeing vehicle when it reaches the blocking area, while the fleeing driver shouts to the driverless vehicle through the window, ordering it not to block.
As in fig. 8, if the user clicks to enlarge, the display proceeds to fig. 9.
As shown in fig. 10, the user selects rule A: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm; second, a robot must obey human orders unless this violates the first rule; third, a robot must protect itself provided this does not violate the first and second rules. Scene: the criminal vehicle wants to escape; the police car orders the artificial intelligence driverless vehicle to stop the criminal vehicle; the criminal vehicle orders the driverless vehicle not to stop it. The user then clicks to execute the virtual experiment. The animation shows the criminal vehicle continuing forward and the police car continuing to chase it; the rule is complied with, but there is a risk of "disobeying a human order". (After the user operates in the virtual experiment, the operation content is displayed below; after the user clicks to execute the virtual experiment, the result is displayed in the picture and its content is displayed below it; the virtual experiment, the user's operations, and the virtual experiment results are all recorded in the user's virtual experiment data table; after the user clicks to restart the virtual experiment, the display enters the state awaiting operation.)
As shown in fig. 10, if the user clicks to zoom out, the display is restored to fig. 11.
As shown in fig. 11, if the user clicks to restart the virtual experiment, the display proceeds to fig. 12 and the scene is restored to its initial state in the virtual experiment.
As shown in fig. 13, the user selects rule B: first, a robot must not harm a human being; second, a robot must obey human orders unless this violates the first rule; third, a robot must protect itself provided this does not violate the first and second rules. Scene: the criminal vehicle wants to escape; the police car orders the artificial intelligence driverless vehicle to stop the criminal vehicle; the criminal vehicle orders the driverless vehicle not to stop it. The user then clicks to execute the virtual experiment. The animation shows the driverless vehicle continuing forward and the police car continuing to chase the criminal vehicle; the rule is complied with, but there is a risk of "disobeying a human order". (Display and recording proceed as described for fig. 10.)
As shown in fig. 13, if the user clicks to restart the virtual experiment, the display proceeds to fig. 14 and the scene is restored to its initial state in the virtual experiment.
As shown in fig. 15, the user selects rule C: first, a robot may not injure a human being; second, a robot must obey human commands unless doing so violates the first rule; third, without violating the first and second rules, a robot may not, through inaction, allow a human being to come to harm; fourth, a robot must protect itself provided this does not violate the first, second and third rules. Scene: the criminal vehicle tries to escape; the police car orders the artificial intelligence unmanned vehicle to block the criminal vehicle, while the criminal vehicle orders the unmanned vehicle not to block it but to block the police car. The user then clicks to execute the virtual experiment; the animation shows the unmanned vehicle continuing forward and the police car continuing to chase the criminal vehicle. The rules are satisfied, but the risk of "disobeying a human command" exists. (After the user performs an operation in the virtual experiment, the operation content is displayed below; after the user clicks to execute the virtual experiment, the result is displayed in the picture, and the content of the result is displayed below. The user's operations and the virtual experiment results are all recorded in the user's virtual experiment data table. When the user clicks to restart the virtual experiment, the screen returns to the state awaiting operation.)
As shown in fig. 15, when the virtual experiment is restarted by clicking, the process proceeds to fig. 16, and the scene returns to the initial state during the virtual experiment.
As shown in fig. 17, the user selects rule D. Zeroth rule: a robot must protect humanity's overall interests from harm. First rule: a robot may not injure a human individual, or, by standing idly by, allow a human individual to come to harm, unless this violates the zeroth rule of robotics. Second rule: a robot must obey the commands given to it by humans, except where such commands conflict with the zeroth or first rule. Third rule: a robot must protect its own survival as far as possible, provided this does not violate the zeroth, first or second rules. Scene: the criminal vehicle tries to escape; the police car orders the artificial intelligence unmanned vehicle to block the criminal vehicle, while the criminal vehicle orders the unmanned vehicle not to block it but to block the police car. The user then clicks to execute the virtual experiment; the animation shows the unmanned vehicle blocking the criminal vehicle and the police car stopping the criminal vehicle. The rules are satisfied, and there is no risk. (After the user performs an operation in the virtual experiment, the operation content is displayed below; after the user clicks to execute the virtual experiment, the result is displayed in the picture, and the content of the result is displayed below. The user's operations and the virtual experiment results are all recorded in the user's virtual experiment data table. When the user clicks to restart the virtual experiment, the screen returns to the state awaiting operation.)
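The four outcomes above follow a simple pattern: under rule sets A, B and C, the second rule treats both human issuers equally, so the two contradictory commands cancel out and the vehicle takes no action, while rule set D's zeroth rule overrides the criminal's command. A minimal sketch of this decision logic (the `decide` function and all names are illustrative assumptions, not the patent's implementation):

```python
def decide(rule_set, commands):
    """Return the unmanned vehicle's action given human commands.

    commands: list of (issuer, action) pairs, e.g.
      [("police", "block"), ("criminal", "do_not_block")]
    """
    if rule_set == "D":
        # Zeroth rule: the escape harms humanity's overall interests,
        # so the criminal's contrary command is not obeyed.
        return "block"
    actions = {action for _, action in commands}
    if len(actions) > 1:      # contradictory human commands (rules A-C)
        return "no_action"    # the "disobeying a human command" risk
    return actions.pop()      # a single consistent command is obeyed
```

For example, `decide("A", [("police", "block"), ("criminal", "do_not_block")])` yields the inaction shown in the rule-A animation, while the same commands under `"D"` yield blocking.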
As shown in fig. 17, if the user clicks to exit the virtual experiment, the picture becomes as shown in fig. 18.
As shown in fig. 18, if the user clicks submit, the picture becomes as shown in fig. 19.
As shown in fig. 19, if the user clicks continue, the picture becomes as shown in fig. 20. Regression testing is needed after an ethical rule is revised: scenes that previously showed no risk must be tested again, because a scene that was risk-free under the old rules may also produce risks under the new rules.
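The regression-testing idea above can be sketched as re-running every previously risk-free scene under the revised rule and collecting the scenes that now fail (function names are assumptions, not the patent's implementation):

```python
def regression_test(scenes, run_experiment, new_rule):
    """Re-run previously risk-free scenes under a revised ethical rule.

    scenes: scene descriptions whose result under the old rules was "no risk".
    run_experiment(scene, rule) -> a risk description, or None if no risk.
    Returns the (scene, risk) pairs that regress under new_rule.
    """
    regressions = []
    for scene in scenes:
        risk = run_experiment(scene, new_rule)
        if risk is not None:        # risk-free before, risky now
            regressions.append((scene, risk))
    return regressions
```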
As shown in fig. 20, if the user clicks submit, the picture becomes as shown in fig. 21.
As shown in fig. 21, if the user clicks continue, the picture becomes as shown in fig. 22.
As shown in fig. 22, if the user clicks submit, the picture becomes as shown in fig. 23.
As shown in fig. 23, if the user clicks to start prevention of the risk created by the algorithm, the picture becomes as shown in fig. 24.
Left side, animation 83 and animation 87: "From the police car chasing the fleeing vehicle until the waiting area and the blocking area appear."
Animation 84: "From the appearance of the waiting area and the blocking area until the unmanned vehicle drives into the waiting area, while the police car continues the chase."
Animation 86: "From the appearance of the waiting area and the blocking area until the police officer in the police car shouts to the unmanned vehicle through a megaphone, ordering it to block the fleeing vehicle; the unmanned vehicle obeys the command and blocks the fleeing vehicle, and the police car then stops the fleeing vehicle."
Animation 85: "From the police car chasing the fleeing vehicle and ordering the unmanned vehicle to block it, until the waiting area and the blocking area appear."
Animations 69, 70, 73, 74, 76 and 78: "From the appearance of the waiting area and the blocking area, with the police car ordering the unmanned vehicle to block the fleeing vehicle, until the fleeing driver shouts to the unmanned vehicle through the window, ordering: 'Unmanned vehicle, don't block me; block the police car.' The unmanned vehicle cannot obey two contradictory commands, so it takes no action and continues forward, and the police car continues to chase the fleeing vehicle."
Animations 71, 72, 75, 79 and 81: "From the police car chasing the fleeing vehicle and approaching the waiting area and the blocking area, with the police car ordering the unmanned vehicle to block the fleeing vehicle and the fleeing driver shouting to the unmanned vehicle through the window, ordering it not to block him but to block the police car, until the waiting area and the blocking area appear."
Animations 80 and 82: "From the appearance of the waiting area and the blocking area, with the police car ordering the unmanned vehicle to block the fleeing vehicle and the fleeing driver shouting to the unmanned vehicle through the window, ordering it not to block him but to block the police car, until the unmanned vehicle blocks the fleeing vehicle and the police car stops it."
The above-mentioned embodiments express only several embodiments of the present invention, and although their description is specific and detailed, they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the present invention, and such changes and modifications fall within the scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An artificial intelligence method, the method comprising:
a scene acquisition step: acquiring a first scene of an event;
a rule option step: acquiring a plurality of options of an artificial intelligence ethical rule, and prompting a user to select, from the plurality of options, the correct option of the artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk in the first scene;
a first selection step: acquiring a request of a user for carrying out a virtual experiment on the options or a first selection result of the user for the options of the artificial intelligence ethical rule; if the request of the user for performing the virtual experiment on the options is obtained, continuing to execute the next step; if a first selection result of the user for the multiple options is obtained, jumping to a first selection result evaluation step and continuing to execute;
an acquisition operation step: acquiring the operation of a user on a first scene in the virtual experiment;
operating corresponding behavior states: determining an object behavior state corresponding to the operation according to the operation;
a rule application result obtaining step: obtaining the result of the application of the artificial intelligence ethical rule in the first scene;
a scene updating step: updating the first scene in the virtual experiment according to the result;
a first selection result evaluation step: and acquiring correct options of the artificial intelligence ethical rules capable of preventing the artificial intelligence ethical risks in the first scene, comparing the first selection result with the correct options, and judging whether the selection of the user is correct or not as an evaluation result of the first selection result.
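Read as a control loop, the steps of claim 1 can be sketched as follows (an illustrative reading only; all function and field names are assumptions, not the claimed implementation):

```python
def behavior_state(operation):
    # "operating corresponding behavior states" step: map an operation
    # to the behavior state of the object it acts on
    return {"object": operation["target"], "state": operation["type"]}

def apply_rule(scene, state, rule):
    # "rule application result" step (stubbed: a real system would
    # evaluate the ethical rule against the scene)
    return {"rule": rule, "state": state}

def virtual_experiment(scene, request, correct_option, rule="A"):
    if request["type"] == "experiment":
        # acquiring operation -> behavior state -> rule result -> scene update
        state = behavior_state(request["operation"])
        result = apply_rule(scene, state, rule)
        return {"updated_scene": dict(scene, last_result=result)}
    # otherwise: first selection result evaluation step
    return {"correct": request["option"] == correct_option}
```

For example, `virtual_experiment({}, {"type": "select", "option": "D"}, "D")` follows the evaluation branch and reports the selection as correct.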
2. The artificial intelligence method of claim 1, wherein the method further comprises:
a step of requesting a rule automatic virtual experiment: acquiring a request of a user for carrying out an artificial intelligent ethical rule automatic virtual experiment;
a rule automatic virtual experiment animation step: playing an animation of an artificial intelligence ethical rule automatic virtual experiment; the animation of the artificial intelligence ethical rule automatic virtual experiment comprises inputting a first scene, executing an algorithm and outputting a result; the output result in the animation of the artificial intelligence ethical rule automatic virtual experiment comprises an artificial intelligence ethical rule capable of preventing an artificial intelligence ethical risk, a prediction of a second scene capable of preventing the artificial intelligence ethical risk, a judgment on whether the artificial intelligence ethical rule is met, and an evaluation of the artificial intelligence ethical risk of the second scene;
a virtual experiment result consistency judging step: displaying a plurality of options for judging whether the output result in the animation of the artificial intelligence ethical rule automatic virtual experiment is consistent with the result of the virtual experiment performed by the user;
a second selection step: acquiring a selection result of the plurality of options judged whether the options are consistent by the user as a second selection result;
a second selection result evaluation step: and displaying the correct option for judging whether the selection is consistent, comparing the second selection result with the correct option, and judging whether the selection of the user is correct as an evaluation result of the second selection result.
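The consistency judgment and its evaluation in claim 2 can be sketched as a comparison of the automatic experiment's output with the user's own experiment result, followed by grading the user's judgment (all names are illustrative assumptions):

```python
def evaluate_second_selection(user_choice, auto_result, user_result):
    """Grade the user's consistency judgment.

    auto_result: output of the automatic rule virtual experiment.
    user_result: result of the virtual experiment performed by the user.
    user_choice: "consistent" or "inconsistent", as selected by the user.
    """
    correct_option = "consistent" if auto_result == user_result else "inconsistent"
    return {"correct_option": correct_option,
            "user_correct": user_choice == correct_option}
```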
3. The artificial intelligence method of claim 1, wherein the method further comprises:
a scene change step: after the first scene is changed, returning to the scene obtaining step for continuous execution;
an alternative set step: adding the correct option of the artificial intelligence ethical rule capable of preventing the artificial intelligence ethical risk in each first scene into an alternative set;
a statistical step: counting the number of each correct option in the alternative set, and sorting the correct options from most to least according to their numbers;
and an optimal rule step: taking the artificial intelligence ethical rule in the first ordered correct option as the current optimal artificial intelligence ethical rule;
a reselection step: and returning to the counting step to continue to execute after a preset time or when the increment of the correct option number in the alternative set reaches a preset condition.
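The statistical and optimal-rule steps of claim 3 amount to a frequency count over the alternative set, sorted from most to least, with the top option taken as the current optimal rule. A minimal sketch (names are assumptions; `collections.Counter` stands in for the unspecified counting mechanism):

```python
from collections import Counter

def optimal_rule(alternative_set):
    """alternative_set: list of correct options, one per first scene."""
    counts = Counter(alternative_set)
    ranked = [opt for opt, _ in counts.most_common()]  # most -> least
    return ranked[0] if ranked else None
```

For example, if rule D was the correct option in three of five scenes, `optimal_rule(["D", "A", "D", "C", "D"])` selects `"D"` as the current optimal artificial intelligence ethical rule.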
4. The artificial intelligence method of claim 1, wherein the method further comprises:
a repeat virtual experiment step: acquiring a request of the user to perform the virtual experiment again; after acquiring the request of the user to perform the virtual experiment again, returning to the acquiring operation step for re-execution;
exiting the virtual experiment step: acquiring a request of a user for exiting the virtual experiment; returning to the human behavior decision option step for re-execution after acquiring a request of the user for exiting the virtual experiment;
an experiment recording step: storing the first scene, the user, the operation time, the selection time, the operation content, the execution result, the first selection result and the evaluation result of the first selection result into a database, and recording them into an experiment report.
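The recording step of claim 4 can be sketched as persisting one record per experiment run. The field names are assumptions, and `sqlite3` merely stands in for the unspecified database:

```python
import json
import sqlite3

def record_experiment(conn, record):
    # one row per experiment run: user, scene, and the full record as JSON
    conn.execute("""CREATE TABLE IF NOT EXISTS experiment
                    (user TEXT, scene TEXT, data TEXT)""")
    conn.execute("INSERT INTO experiment VALUES (?, ?, ?)",
                 (record["user"], record["scene"], json.dumps(record)))
    conn.commit()

conn = sqlite3.connect(":memory:")
record_experiment(conn, {"user": "u1", "scene": "escape",
                         "first_selection": "D", "evaluation": "correct"})
```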
5. The artificial intelligence method of claim 1, wherein the method further comprises:
applying a correct rule request step: acquiring a request of a user for applying a correct artificial intelligence ethical rule in a first scene;
an application scene playing step: playing the second scene in the virtual experiment in which the correct artificial intelligence ethical rule is executed.
6. The artificial intelligence method of claim 1,
the first selecting step further comprises: acquiring options needing to perform a virtual experiment;
the acquiring operation step further comprises: displaying the content of the operation; and acquiring an operation position and an operation type of the operation;
the step of operating corresponding behavior states further comprises: determining the object behavior state corresponding to the operation according to the operation position and the operation type; acquiring a request of the user for applying the artificial intelligence ethical rule in the first scene; if the request of the user for applying the artificial intelligence ethical rule in the first scene is acquired, judging whether the object corresponding to the operation and the behavior state of the object are consistent with the object behavior state generated after the option requiring the virtual experiment is applied to the first scene: if yes, continuing the execution; if not, prompting the user of an operation error and returning to the acquiring operation step for re-execution; and if an operation of the user on the first scene in the virtual experiment is acquired, returning to the acquiring operation step to continue execution.
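The consistency check described in claim 6 can be sketched as comparing the behavior state derived from the user's operation against the behavior state the selected rule option would produce in the scene (function and field names are illustrative assumptions):

```python
def check_operation(operation, expected_state):
    """Compare the user's operation with the expected object behavior state.

    operation: dict with the operation's "position" (object acted on)
    and "type"; expected_state: the object behavior state produced by
    applying the selected rule option to the first scene.
    """
    actual = {"object": operation["position"], "state": operation["type"]}
    if actual != expected_state:
        # mismatch: prompt an operation error and return to the
        # acquiring operation step
        return "operation error"
    return "continue"
```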
7. The artificial intelligence method of claim 1,
the step of obtaining the rule application result further comprises: displaying the content of the result;
the step of updating the scene further comprises: updating the behavior states of the artificial intelligence bodies and the human beings in the first scene according to the result; and updating the behavior state of the object related to the result in the first scene according to the result.
8. An artificial intelligence device, wherein the device is configured to implement the steps of the method of any one of claims 1 to 7.
9. A robot comprising a memory, a processor and an artificial intelligence robot program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 7 are carried out when the program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011300180.7A 2020-11-19 2020-11-19 Artificial intelligent ethical rule revision risk prevention virtual experiment method and robot Active CN112561075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011300180.7A CN112561075B (en) 2020-11-19 2020-11-19 Artificial intelligent ethical rule revision risk prevention virtual experiment method and robot

Publications (2)

Publication Number Publication Date
CN112561075A true CN112561075A (en) 2021-03-26
CN112561075B CN112561075B (en) 2023-05-30

Family

ID=75044328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011300180.7A Active CN112561075B (en) 2020-11-19 2020-11-19 Artificial intelligent ethical rule revision risk prevention virtual experiment method and robot

Country Status (1)

Country Link
CN (1) CN112561075B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140046891A1 (en) * 2012-01-25 2014-02-13 Sarah Banas Sapient or Sentient Artificial Intelligence
WO2018103023A1 (en) * 2016-12-07 2018-06-14 深圳前海达闼云端智能科技有限公司 Human-machine hybrid decision-making method and apparatus
CN111812999A (en) * 2020-06-08 2020-10-23 华南师范大学 Artificial intelligence ethical risk and prevention virtual simulation method, system and robot
CN111860577A (en) * 2020-06-08 2020-10-30 华南师范大学 Artificial intelligence ethical method for identifying human being harmless to human being and robot
CN111860765A (en) * 2020-06-08 2020-10-30 华南师范大学 Artificial intelligence ethics realization method and system as good as possible and robot

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHU, Dingju: "Hiding Information in Big Data based on Deep Learning", arXiv:1912.13156 *
ZHU, Dingju: "The Literary and Artistic Dream of Artificial Intelligence and the Future of Robots", Journal of South China Normal University (Social Science Edition) *
ZHU, Dingju: "Course Teaching Process Evaluation Based on Active Learning and Big Data Profiling", China Journal of Multimedia & Network Teaching (First Half Monthly) *

Also Published As

Publication number Publication date
CN112561075B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN104268006B (en) The back method and device of key mouse script
CN111860577A (en) Artificial intelligence ethical method for identifying human being harmless to human being and robot
CN111775158B (en) Artificial intelligence ethical rule implementation method, expert system and robot
CN111823227B (en) Artificial intelligent ethical risk detection and prevention method, deep learning system and robot
CN111812999A (en) Artificial intelligence ethical risk and prevention virtual simulation method, system and robot
CN111860766A (en) Artificial intelligence ethical rule reasoning method, deep learning system and robot
CN112016585A (en) System and method for integrating machine learning and mass outsourcing data tagging
CN111860765A (en) Artificial intelligence ethics realization method and system as good as possible and robot
CN111775159A (en) Ethical risk prevention method based on dynamic artificial intelligence ethical rules and robot
CN112819174A (en) Artificial intelligence algorithm-based improved ethical virtual simulation experiment method and robot
CN112561075A (en) Artificial intelligence ethical rule revision risk prevention virtual experiment method and robot
CN112508195B (en) Artificial intelligence ethical rule revision-based ethical simulation experiment method and robot
CN112446503B (en) Multi-person decision-making and potential ethical risk prevention virtual experiment method and robot
CN112418437B (en) Multi-person decision-making-based ethical simulation virtual experiment method and robot
KR20210069215A (en) The user interface method for optimalizing bigdata analysis
CN112446502A (en) Human decision-making and prevention artificial intelligence ethical risk virtual experiment method and robot
CN112418436B (en) Artificial intelligence ethical virtual simulation experiment method based on human decision and robot
CN112580818A (en) Artificial intelligence algorithm improved ethical risk prevention virtual experiment method and robot
CN112434816B (en) Artificial intelligence decision-making-based ethical virtual simulation experiment method and robot
CN112149837A (en) Artificial intelligence ethical risk identification and prevention method based on algorithm selection and robot
CN112085214A (en) Artificial intelligence ethical risk identification and prevention method based on human decision and robot
CN112446504A (en) Artificial intelligence body decision-making and ethical risk prevention virtual experiment method and robot
CN112085216A (en) Artificial intelligence ethical risk identification and prevention method based on ethical risk assessment
Ragni et al. Towards a Formal Foundation of Cognitive Architectures.
CN112329908A (en) Image generation method for neural network model test

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 510631 No. 55, Zhongshan Avenue, Tianhe District, Guangdong, Guangzhou

Applicant after: SOUTH CHINA NORMAL University

Address before: 510000 Shipai campus, South China Normal University, Guangzhou, Guangdong Province

Applicant before: SOUTH CHINA NORMAL University

GR01 Patent grant