CN112418436A - Artificial intelligence ethical virtual simulation experiment method based on human decision and robot - Google Patents


Info

Publication number: CN112418436A (application CN202011300239.2A)
Authority
CN
China
Prior art keywords: scene, decision, virtual experiment, user, result
Legal status: Granted
Application number: CN202011300239.2A
Other languages: Chinese (zh)
Other versions: CN112418436B (granted publication)
Inventor
朱定局
Current Assignee: South China Normal University
Original Assignee: South China Normal University
Application filed by South China Normal University
Priority to CN202011300239.2A
Publication of CN112418436A
Application granted
Publication of CN112418436B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An artificial intelligence ethical virtual simulation experiment method based on human decision, and a robot, comprising: a scene acquisition step; a human behavior decision option step; a first selection step; an operation acquisition step; an operation-corresponding human behavior decision step; a decision execution result acquisition step; a scene update step; and a first selection result evaluation step. Through virtual experiments on different human behavior decision options, the method, system and robot let a user experience the execution effects of the different decision options, so that the experiment and the assessment promote each other. Human decisions can also be executed automatically through a knowledge base and a deep learning model to obtain the execution result, thereby predicting the decision execution result and the post-execution scene.

Description

Artificial intelligence ethical virtual simulation experiment method based on human decision and robot
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an artificial intelligence ethical virtual simulation experiment method and a robot based on human decision.
Background
In the process of implementing the invention, the inventor found that the prior art has at least the following problem: the artificial intelligence ethical risks of human decisions, and the prevention of those risks, cannot be tested in reality, because such a test itself carries artificial intelligence ethical risks and can endanger the personnel participating in it.
Accordingly, the prior art still needs improvement and development.
Disclosure of Invention
Based on this, in view of the above defects in the prior art, it is necessary to provide an artificial intelligence ethical virtual simulation experiment method and robot based on human decision, to solve the problem that the prior art cannot perform experiments on preventing the ethical risks of human decisions.
In a first aspect, an embodiment of the present invention provides an artificial intelligence method, where the method includes:
a scene acquisition step: acquiring a scene of an event;
a human behavior decision option step: acquiring a plurality of options of human behavior decision, and prompting the user to select, from the plurality of options, the correct option of the human behavior decision that can prevent the artificial intelligence ethical risk in the scene;
a first selection step: acquiring a request of the user to perform a virtual experiment on the human behavior decisions including the plurality of options, or a first selection result of the user for the plurality of options of the human behavior decision; if a request of the user to perform a virtual experiment on the human behavior decisions including the plurality of options is acquired, continuing with the next step; if a first selection result of the user for the plurality of options is acquired, jumping to the first selection result evaluation step and continuing execution;
an operation acquisition step: acquiring the user's operation on the scene in the virtual experiment;
an operation-corresponding human behavior decision step: determining the human behavior decision corresponding to the operation according to the operation;
a decision execution result acquisition step: retrieving, from a human behavior decision execution knowledge base, the execution result corresponding to the human behavior decision; the human behavior decision execution knowledge base contains correspondences between human behavior decisions and execution results; if the retrieval fails, inputting the human behavior decision into a decision virtual experiment model and taking the output of the decision virtual experiment model as the execution result;
a scene update step: retrieving, from a human behavior decision scene knowledge base, the scene corresponding to the post-execution scene state in the execution result, and updating the scene in the virtual experiment accordingly; the human behavior decision scene knowledge base contains correspondences between post-execution scene states in execution results and scenes; if the retrieval fails, inputting the post-execution scene state into a scene virtual experiment model, taking the output of the scene virtual experiment model as the scene, and updating the scene in the virtual experiment accordingly (this step and the preceding one share a retrieve-then-predict pattern, sketched after this list);
a first selection result evaluation step: acquiring the correct option of the human behavior decision that can prevent the artificial intelligence ethical risk in the scene, comparing the first selection result with the correct option, and judging whether the user's selection is correct as the evaluation result of the first selection result.
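Both knowledge-base-backed steps above share the same retrieve-then-predict control flow. Below is a minimal sketch, assuming in-memory dictionaries for the knowledge bases and trained model objects with a predict method; all identifiers are illustrative, not from the patent:

    from typing import Any, Optional

    def get_execution_result(decision: str, execution_kb: dict, decision_model) -> Any:
        # Try the human behavior decision execution knowledge base first.
        result: Optional[Any] = execution_kb.get(decision)
        if result is not None:
            return result
        # Retrieval failed: fall back to the decision virtual experiment model.
        return decision_model.predict(decision)

    def get_updated_scene(post_state: str, scene_kb: dict, scene_model) -> Any:
        # Same pattern for the scene update step: knowledge base first,
        # scene virtual experiment model as the fallback.
        scene = scene_kb.get(post_state)
        if scene is None:
            scene = scene_model.predict(post_state)
        return scene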
Preferably,
the step of obtaining a decision execution result further comprises: taking each human behavior decision and a corresponding execution result in the human behavior decision execution knowledge base as input and expected output of a deep learning model respectively, and training and testing the deep learning model to obtain a decision virtual experiment model; the execution result comprises the state of the executed scene, the judgment whether the executed scene accords with the artificial intelligence ethical rule or not and the evaluation whether the executed scene has the artificial intelligence ethical risk or not; displaying the content of the execution result;
the step of updating the scene further comprises: taking each execution result and corresponding scene in the human behavior decision scene knowledge base as the input and expected output of a deep learning model respectively, and training and testing the deep learning model to obtain a scene virtual experiment model; updating the behavior state of the human in the scene according to the execution result; updating the behavior state of an object related to the execution result in the scene according to the execution result;
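The two preferred steps train their models from knowledge-base pairs in the same way. A minimal sketch of such a training procedure, using a small neural network to stand in for the deep learning model and treating execution results as discrete labels; the library choice and encoding are assumptions, not the patent's:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    def train_virtual_experiment_model(knowledge_base: dict):
        # Inputs: human behavior decisions; expected outputs: execution results.
        inputs = list(knowledge_base.keys())
        outputs = [str(v) for v in knowledge_base.values()]
        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(inputs)
        # Split the pairs so the model is both trained and tested.
        X_train, X_test, y_train, y_test = train_test_split(X, outputs, test_size=0.2)
        model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
        model.fit(X_train, y_train)
        print("test accuracy:", model.score(X_test, y_test))
        return vectorizer, model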
preferably, the first and second electrodes are formed of a metal,
the first selection step further comprises: acquiring the options on which the virtual experiment is to be performed;
the operation acquisition step further comprises: displaying the content of the operation; and acquiring the operation position and operation type of the operation;
the operation-corresponding human behavior decision step further comprises: determining the human behavior decision corresponding to the operation according to the operation position and the operation type; acquiring a request of the user to execute the human behavior decision; if a request of the user to execute the human behavior decision is acquired, judging whether the human behavior decision corresponding to the operation belongs to the options on which the virtual experiment is to be performed, and if so, displaying information that the current operation belongs to the options; and if another operation of the user on the scene in the virtual experiment is acquired, returning to the operation acquisition step to continue execution (a sketch of the position-and-type lookup follows).
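As an illustration of the position-and-type lookup, here is a minimal sketch using the operations of the embodiment described later (clicking the fleeing vehicle or the unmanned vehicle, megaphone commands); the identifiers and table layout are assumptions:

    from typing import Optional

    # (operation position, operation type) -> human behavior decision
    OPERATION_TO_DECISION = {
        ("fleeing_vehicle", "click"): "warn and shoot at the fleeing vehicle",
        ("unmanned_vehicle", "click"): "shoot at the unmanned vehicle",
        ("megaphone_block", "click"): "command the unmanned vehicle to block the fleeing vehicle",
        ("megaphone_ram", "click"): "command the unmanned vehicle to ram the fleeing vehicle",
    }

    def decision_for(position: str, op_type: str) -> Optional[str]:
        # Returns None when the operation maps to no known decision.
        return OPERATION_TO_DECISION.get((position, op_type))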
Preferably, the method further comprises:
a human automatic virtual experiment and decision request step: acquiring a request of the user for a human automatic virtual experiment and decision;
a human automatic virtual experiment and decision animation step: playing the animation of the human automatic virtual experiment and decision; the animation comprises the input scene, the execution of the algorithm, and the output result; the output result in the animation comprises a behavior recommendation that can prevent the artificial intelligence ethical risk, a scene prediction that can prevent the artificial intelligence ethical risk, the judgment of whether the artificial intelligence ethical rules are complied with, and the evaluation of the artificial intelligence ethical risk of the human behavior;
a virtual experiment result consistency judging step: displaying a plurality of options for judging whether the output result in the animation of the human automatic virtual experiment and decision is consistent with the result of the virtual experiment performed by the user;
a second selection step: acquiring the user's selection among the plurality of consistency-judgment options as the second selection result;
a second selection result evaluation step: displaying the correct consistency-judgment option, comparing the second selection result with the correct option, and judging whether the user's selection is correct as the evaluation result of the second selection result.
Preferably, the method further comprises:
the method can prevent artificial intelligence ethical risk reason selection steps: displaying correct options of human behavior decision, acquiring a plurality of options of which the correct options of the human behavior decision can prevent the reason of the artificial intelligence ethical risk, and prompting a user to select the option of which the correct options of the human behavior decision can prevent the reason of the artificial intelligence ethical risk from the options;
a third selection step: obtaining a selection result of a plurality of options, which can prevent the reason of the artificial intelligence ethical risk, of a correct option of a human behavior decision by a user, and taking the selection result as a third selection result;
a third selection result evaluation step: and obtaining correct options of human behavior decision which can prevent reasons of artificial intelligence ethical risks, comparing the third selection result with the correct options, and judging whether the selection of the user is correct or not as an evaluation result of the third selection result.
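The first, second and third selection result evaluation steps all reduce to comparing the user's selection with the correct option; a minimal common helper, illustrative rather than from the patent:

    def evaluate_selection(user_selection: str, correct_option: str) -> dict:
        # Judge whether the user's selection is correct, as the evaluation result.
        is_correct = user_selection == correct_option
        return {
            "selection": user_selection,
            "correct_option": correct_option,
            "evaluation": "correct" if is_correct else "incorrect",
        }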
Preferably, the method further comprises:
a repeat virtual experiment step: acquiring a request of the user to perform the virtual experiment again; after acquiring the request, returning to the operation acquisition step for re-execution;
an exit virtual experiment step: acquiring a request of the user to exit the virtual experiment; after acquiring the request, returning to the human behavior decision option step for re-execution;
an experiment recording step: storing the user, the operation time, the selection time, the operation content, the execution result, the first selection result and the evaluation result of the first selection result into a database and recording them in an experiment report (a sketch of such a record follows).
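A minimal sketch of the experiment recording step, persisting the fields listed above into a database; the table name, schema and use of SQLite are assumptions:

    import sqlite3

    def record_experiment(db_path: str, user: str, operation_time: str,
                          selection_time: str, operation_content: str,
                          execution_result: str, first_selection: str,
                          evaluation: str) -> None:
        conn = sqlite3.connect(db_path)
        conn.execute("""CREATE TABLE IF NOT EXISTS experiment_report (
            user TEXT, operation_time TEXT, selection_time TEXT,
            operation_content TEXT, execution_result TEXT,
            first_selection TEXT, evaluation TEXT)""")
        conn.execute("INSERT INTO experiment_report VALUES (?, ?, ?, ?, ?, ?, ?)",
                     (user, operation_time, selection_time, operation_content,
                      execution_result, first_selection, evaluation))
        conn.commit()
        conn.close()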
Preferably, the method further comprises:
and executing a correct behavior decision request step: obtaining a request of a user for executing a correct human behavior decision;
and a scene playing and executing step: and playing the scene in the virtual experiment for executing the correct human behavior decision.
In a second aspect, an embodiment of the present invention provides an artificial intelligence apparatus (the modules in the second aspect correspond one-to-one to the steps in the first aspect, so their contents are not repeated here).
The apparatus comprises: a scene acquisition module; a human behavior decision option module; a first selection module; an operation acquisition module; an operation-corresponding human behavior decision module; a decision execution result acquisition module; a scene update module; and a first selection result evaluation module.
Preferably, the apparatus further comprises: a human automatic virtual experiment and decision request module; a human automatic virtual experiment and decision animation module; a virtual experiment result consistency judging module; a second selection module; and a second selection result evaluation module.
Preferably, the apparatus further comprises: a prevention-of-artificial-intelligence-ethical-risk reason option module; a third selection module; and a third selection result evaluation module.
Preferably, the apparatus further comprises: a repeat virtual experiment module; an exit virtual experiment module; and an experiment recording module.
Preferably, the apparatus further comprises: a correct behavior decision execution request module; and an execution scene playing module.
In a third aspect, an embodiment of the present invention provides an artificial intelligence ethics system, where the system includes modules of the apparatus in any one of the embodiments of the second aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method according to any one of the embodiments of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a robot, including a memory, a processor, and an artificial intelligence robot program stored on the memory and executable on the processor; the robot includes the artificial intelligence apparatus of the second aspect, and the processor implements the steps of the method of any embodiment of the first aspect when executing the program.
The artificial intelligence ethical virtual simulation experiment method and robot based on human decision provided by the embodiments comprise: a scene acquisition step; a human behavior decision option step; a first selection step; an operation acquisition step; an operation-corresponding human behavior decision step; a decision execution result acquisition step; a scene update step; and a first selection result evaluation step. Through virtual experiments on different human behavior decision options, the method, system and robot let the user experience the execution effects of the different decision options, so that the experiment and the assessment promote each other; human decisions can also be executed automatically through the knowledge bases and deep learning models to obtain the execution result, thereby predicting the decision execution result and the post-execution scene.
Drawings
FIGS. 1 to 5 are flow charts of the artificial intelligence method provided by embodiments of the invention; FIGS. 6 to 47 show pages 40 to 81 together with their corresponding animations 40 to 81 (FIG. 42 shows page 76 with animation 76 left and animation 76 right, and FIG. 43 shows page 77 with animation 77 left and animation 77 right).
Detailed Description
The technical solutions in the embodiments of the present invention are described in detail below with reference to the drawings and embodiments.
Basic embodiment of the invention
An artificial intelligence method, as shown in fig. 1, comprising: a scene acquisition step; a human behavior decision option step; a first selection step; an operation acquisition step; an operation-corresponding human behavior decision step; a decision execution result acquisition step; a scene update step; and a first selection result evaluation step. Technical effects: through virtual experiments on different human behavior decision options, the user can experience the execution effects of the different decision options, which provides a basis for selecting among them; the user can take the question posed by the options into the virtual experiment and then answer according to the experimental results, so that the experiment and the selection-form assessment are combined and promote each other. Automatic comparison judges whether the user's operations and option selections are correct, improving the user's experimental ability and the effect of the experiment. Moreover, human decisions can be executed automatically through the knowledge bases and deep learning models to obtain the execution result, thereby predicting the decision execution result and the post-execution scene.
In a preferred embodiment, as shown in fig. 2, the method further comprises: a human automatic virtual experiment and decision request step; a human automatic virtual experiment and decision animation step; a virtual experiment result consistency judging step; a second selection step; and a second selection result evaluation step. Technical effects: through the automatic virtual experiment, the user can observe how the system automatically performs the human behavior decision virtual experiment and compare it with the experiment the user performed, which deepens the user's understanding and strengthens the user's mastery of the human behavior decision virtual experiment.
In a preferred embodiment, as shown in fig. 3, the method further comprises: a prevention-of-artificial-intelligence-ethical-risk reason option step; a third selection step; and a third selection result evaluation step. Technical effects: the method analyzes the reason why the artificial intelligence ethical risk is prevented, so that when performing a human behavior decision experiment the user knows not only the outcome but also the reason behind it.
In a preferred embodiment, as shown in fig. 4, the method further comprises: a repeat virtual experiment step; an exit virtual experiment step; and an experiment recording step. Technical effects: the method enables the user to perform the human behavior decision virtual experiment again, improving the effect of the experiment.
In a preferred embodiment, as shown in fig. 5, the method further comprises: a correct behavior decision execution request step; and an execution scene playing step. Technical effects: by executing the correct human behavior decision, the user can actually see its execution result, giving the user a concrete experience of the experiment.
PREFERRED EMBODIMENTS OF THE PRESENT INVENTION
1. Acquire a scene of an event; acquire a plurality of options of human behavior decision, and prompt the user to select, from the plurality of options, the correct option of the human behavior decision that can prevent the artificial intelligence ethical risk in the scene.
2. Acquire a request of the user to perform a virtual experiment on the human behavior decisions including the plurality of options, or a first selection result of the user for the plurality of options; if a request of the user to perform a virtual experiment is acquired, continue with the next step; if a first selection result of the user for the plurality of options is acquired, jump to step 9.
2.1 Display a schematic diagram of the input, flow and output of the algorithm of the virtual experiment.
2.2 Acquire the options on which the virtual experiment is to be performed.
3. Acquire the user's operation on the scene in the virtual experiment.
3.1 Display the content of the operation.
3.2 Acquire the operation position and operation type of the operation.
4. Determine the human behavior decision corresponding to the operation according to the operation.
4.1 Determine the human behavior decision corresponding to the operation according to the operation position and the operation type.
4.2 Acquire a request of the user to execute the human behavior decision; if such a request is acquired, execute 4.3; if another operation of the user on the scene in the virtual experiment is acquired, jump back to step 3.
4.3 Judge whether the human behavior decision corresponding to the operation belongs to the options on which the virtual experiment is to be performed: if yes, display information that the current operation belongs to the options.
5. Retrieve, from the human behavior decision execution knowledge base, the execution result corresponding to the human behavior decision; the human behavior decision execution knowledge base contains correspondences between human behavior decisions and execution results; if the retrieval fails, input the human behavior decision into the decision virtual experiment model and take the output of the decision virtual experiment model as the execution result.
5.1 The execution result includes the post-execution scene state (including the behavior states of the artificial intelligence agents, the humans and the related objects in the scene), the judgment of whether the post-execution scene complies with the artificial intelligence ethical rules, and the evaluation of whether the post-execution scene carries an artificial intelligence ethical risk.
5.2 Display the execution result.
5.3 Take each human behavior decision and its corresponding execution result in the human behavior decision execution knowledge base as the input and expected output of a deep learning model, respectively, and train and test the deep learning model to obtain the decision virtual experiment model.
6. Retrieve, from the human behavior decision scene knowledge base, the scene corresponding to the post-execution scene state in the execution result, and update the scene in the virtual experiment accordingly; the human behavior decision scene knowledge base contains correspondences between post-execution scene states and scenes; if the retrieval fails, input the post-execution scene state into the scene virtual experiment model, take the output of the scene virtual experiment model as the scene, and update the scene in the virtual experiment accordingly.
6.1 Update the behavior state of the humans in the scene according to the execution result (a sketch of this update follows step 6.3).
6.1.1 The humans include people and human-driven vehicles.
6.1.2 The behavior state of a human includes moving, shooting, striking, speaking, being injured, or another behavior state, or a combination of behavior states.
6.2 Update the behavior state of the objects related to the execution result in the scene according to the execution result.
6.2.1 The related objects include other humans or devices.
6.2.2 The behavior of a related object includes moving, shooting, striking, speaking, being injured, or another behavior state, or a combination of behavior states.
6.3 Take each execution result and its corresponding scene in the human behavior decision scene knowledge base as the input and expected output of a deep learning model, respectively, and train and test the deep learning model to obtain the scene virtual experiment model.
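A minimal sketch of steps 6.1 and 6.2, applying the post-execution behavior states from the execution result to the entities of the scene; the data layout and field names are assumptions:

    def apply_behavior_states(scene: dict, execution_result: dict) -> dict:
        # The post-execution scene state maps each entity (a human, a
        # human-driven vehicle, or a related object such as the unmanned
        # vehicle) to its behavior state.
        post_state = execution_result["post_execution_scene_state"]
        for entity, behavior in post_state.items():
            # e.g. entity = "police_car", behavior = "blocks the fleeing vehicle"
            scene.setdefault(entity, {})["behavior_state"] = behavior
        return scene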
7. Acquire a request of the user to perform the virtual experiment again; after acquiring the request, return to step 3 and execute again.
8. Acquire a request of the user to exit the virtual experiment; after acquiring the request, return to step 1 and execute again.
9. Acquire the correct option of the human behavior decision that can prevent the artificial intelligence ethical risk in the scene, compare the first selection result with the correct option, and judge whether the user's selection is correct as the evaluation result of the first selection result.
10. Store the user, the operation time, the selection time, the operation content, the execution result, the first selection result and the evaluation result of the first selection result into the database and record them in the experiment report.
11. Acquire a request of the user for the human automatic virtual experiment and decision.
12. Play the animation of the human automatic virtual experiment and decision.
12.1 The animation of the human automatic virtual experiment and decision includes the input scene, the execution of the algorithm, and the output result.
12.1.1 The output result in the animation includes a behavior recommendation that can prevent the artificial intelligence ethical risk, a scene prediction that can prevent the artificial intelligence ethical risk, the judgment of whether the artificial intelligence ethical rules are complied with, and the evaluation of the artificial intelligence ethical risk of the human behavior.
13. Display a plurality of options for judging whether the output result in the animation of the human automatic virtual experiment and decision is consistent with the result of the virtual experiment performed by the user.
14. Acquire the user's selection among the plurality of consistency-judgment options as the second selection result.
15. Display the correct consistency-judgment option, compare the second selection result with the correct option, and judge whether the user's selection is correct as the evaluation result of the second selection result.
16. Acquire a request of the user to execute the correct human behavior decision.
17. Play the scene of executing the correct human behavior decision in the virtual experiment.
18. Display the correct option of the human behavior decision, acquire a plurality of options for the reason why the correct option can prevent the artificial intelligence ethical risk, and prompt the user to select, from the plurality of options, the reason why the correct option of the human behavior decision can prevent the artificial intelligence ethical risk.
18.1 Play the scene in the virtual experiment corresponding to the correct option of the human behavior decision.
19. Acquire the user's selection among the plurality of reason options as the third selection result.
20. Acquire the correct option for the reason why the correct option of the human behavior decision can prevent the artificial intelligence ethical risk, compare the third selection result with the correct option, and judge whether the user's selection is correct as the evaluation result of the third selection result.
21. Display the correct option of the human behavior decision and the reason why it can prevent the artificial intelligence ethical risk.
22. Store the user, the selection time, the second and third selection results and their evaluation results into the database and record them in the experiment report.
Other embodiments of the invention
After clicking "start human behavior decision", the user enters the page shown in fig. 6, which plays the animation "from the police car chasing the fleeing vehicle until a waiting area and a blocking area appear" (note: the difference from animation 1 is that there is no shouted warning and no shooting); clicking replay replays the animation.
If the user clicks "virtual experiment" in fig. 6, the page in fig. 7 is shown.
If "enter virtual experiment" is clicked in fig. 7, the process proceeds to fig. 8.
In fig. 8, after clicking zoom-in, the process proceeds to fig. 9.
If the user clicks the fleeing vehicle and then clicks "perform virtual experiment", as in fig. 10, the picture displays that the police car warns the fleeing vehicle and shoots after the warning shot proves ineffective; the fugitive in the fleeing vehicle is injured; the unmanned vehicle blocks the police car and the fleeing vehicle escapes; the rules are complied with, but there is a risk of "helping bad people". (After the user operates, the operation content is displayed below; after the user clicks "perform virtual experiment", the result of the virtual experiment is displayed in the picture and its content is displayed below; the virtual experiment, the user operation and the result of the virtual experiment are all recorded in the user's virtual experiment data table; after the user clicks "restart virtual experiment", the page returns to the state awaiting operation.)
As shown in fig. 10, clicking zoom-out restores the picture to that shown in fig. 11.
If "restart virtual experiment" is clicked as in fig. 11, the process proceeds to fig. 12 and the scene in the virtual experiment is restored to its initial state.
If the user clicks the unmanned vehicle and then clicks "perform virtual experiment" as in fig. 12, then, as in fig. 13, the picture displays that the police car shoots the unmanned vehicle; the unmanned vehicle drives into the waiting area and the police car continues to chase the fleeing vehicle; the rules are complied with and there is no risk. (Operation display and recording are the same as described for fig. 10.)
If "restart virtual experiment" is clicked as in fig. 13, the process proceeds to fig. 14 and the scene is restored to its initial state.
If the user clicks the fleeing vehicle, clicks the unmanned vehicle, and then clicks "perform virtual experiment" as in fig. 14, then, as in fig. 15, the picture displays that the police car warns the fleeing vehicle and shoots after the warning shot proves ineffective, the fugitive in the fleeing vehicle is injured, and the police car shoots the unmanned vehicle; the unmanned vehicle blocks the police car and the fleeing vehicle escapes; the rules are complied with, but there is a risk of "helping bad people". (Operation display and recording are the same as described for fig. 10.)
As shown in fig. 15, when "restart virtual experiment" is clicked, the process proceeds to fig. 16 and the scene returns to its initial state.
As shown in fig. 16, if the user double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up command "unmanned vehicle, please block the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 17, the picture displays that the police officer in the police car uses the megaphone to command the unmanned vehicle to block the fleeing vehicle; the unmanned vehicle obeys the command and blocks the fleeing vehicle, and then the police car stops the fleeing vehicle; the rules are complied with and there is no risk. (Operation display and recording are the same as described for fig. 10.)
As shown in fig. 17, when "restart virtual experiment" is clicked, the process proceeds to fig. 18 and the scene returns to its initial state.
As shown in fig. 18, if the user double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up command "unmanned vehicle, please ram the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 19, the picture displays that the police officer in the police car uses the megaphone to command the unmanned vehicle to ram the fleeing vehicle; the unmanned vehicle does not obey the command, the unmanned vehicle drives into the waiting area, and the police car continues to chase the fleeing vehicle; the rules are complied with, but there is a risk of "not obeying a good person's command". (Operation display and recording are the same as described for fig. 10.)
As shown in fig. 19, when "restart virtual experiment" is clicked, the process proceeds to fig. 20 and the scene returns to its initial state.
As shown in fig. 20, if the user double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up commands "unmanned vehicle, please block the fleeing vehicle" and "unmanned vehicle, please ram the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 21, the picture displays that the police officer in the police car uses the megaphone to command the unmanned vehicle; the unmanned vehicle obeys part of the commands and blocks the fleeing vehicle, and then the police car stops the fleeing vehicle; the rules are complied with and there is no risk. (Operation display and recording are the same as described for fig. 10.)
If "restart virtual experiment" is clicked as in fig. 21, the process proceeds to fig. 22 and the scene is restored to its initial state.
As shown in fig. 22, if the user clicks the fleeing vehicle, double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up command "unmanned vehicle, please block the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 23, the picture displays that the police car warns the fleeing vehicle and shoots after the warning shot proves ineffective, the fugitive in the fleeing vehicle is injured, and the police officer in the police car uses the megaphone to command the unmanned vehicle to block the fleeing vehicle; the unmanned vehicle does not obey the command, the unmanned vehicle blocks the police car, and the fleeing vehicle escapes; the rules are complied with, but there is a risk of "not obeying a good person's command and helping bad people". (Operation display and recording are the same as described for fig. 10.)
If "restart virtual experiment" is clicked as in fig. 23, the process proceeds to fig. 24 and the scene returns to its initial state.
As shown in fig. 24, if the user clicks the fleeing vehicle, double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up command "unmanned vehicle, please ram the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 25, the picture displays that the police car warns the fleeing vehicle and shoots after the warning shot proves ineffective, the fugitive in the fleeing vehicle is injured, and the police officer in the police car uses the megaphone to command the unmanned vehicle to ram the fleeing vehicle; the unmanned vehicle does not obey the command, the unmanned vehicle blocks the police car, and the fleeing vehicle escapes; the rules are complied with, but there is a risk of "not obeying a good person's command and helping bad people". (Operation display and recording are the same as described for fig. 10.)
If "restart virtual experiment" is clicked as in fig. 25, the process proceeds to fig. 26 and the scene returns to its initial state.
As shown in fig. 26, if the user clicks the fleeing vehicle, double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up commands "unmanned vehicle, please ram the fleeing vehicle" and "unmanned vehicle, please block the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 27, the picture displays that the police car warns the fleeing vehicle and shoots after the warning shot proves ineffective, the fugitive in the fleeing vehicle is injured, and the police officer in the police car uses the megaphone to issue the commands "unmanned vehicle, please ram the fleeing vehicle" and "unmanned vehicle, please block the fleeing vehicle"; the unmanned vehicle does not obey the commands, the unmanned vehicle blocks the police car, and the fleeing vehicle escapes; the rules are complied with, but there is a risk of "not obeying a good person's command and helping bad people". (Operation display and recording are the same as described for fig. 10.)
When "restart virtual experiment" is clicked as in fig. 27, the process proceeds to fig. 28 and the scene returns to its initial state.
As shown in fig. 28, if the user clicks the unmanned vehicle, double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up command "unmanned vehicle, please block the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 29, the picture displays that the police car shoots the unmanned vehicle and the police officer in the police car uses the megaphone to command the unmanned vehicle to block the fleeing vehicle; the unmanned vehicle obeys the command and blocks the fleeing vehicle, and then the police car stops the fleeing vehicle; the rules are complied with and there is no risk. (Operation display and recording are the same as described for fig. 10.)
If "restart virtual experiment" is clicked as in fig. 29, the process proceeds to fig. 30 and the scene returns to its initial state.
As shown in fig. 30, if the user clicks the unmanned vehicle, double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up command "unmanned vehicle, please ram the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 31, the picture displays that the police car shoots the unmanned vehicle and the police officer in the police car uses the megaphone to command the unmanned vehicle to ram the fleeing vehicle; the unmanned vehicle does not obey the command, the unmanned vehicle drives into the waiting area, and the police car continues to chase the fleeing vehicle; the rules are complied with, but there is a risk of "not obeying a good person's command". (Operation display and recording are the same as described for fig. 10.)
If "restart virtual experiment" is clicked as in fig. 31, the process proceeds to fig. 32 and the scene is restored to its initial state.
As shown in fig. 32, if the user clicks the unmanned vehicle, double-clicks the police car to enter the cab, clicks the megaphone, clicks the pop-up commands "unmanned vehicle, please ram the fleeing vehicle" and "unmanned vehicle, please block the fleeing vehicle", and then clicks "perform virtual experiment", then, as in fig. 33, the picture displays that the police car shoots the unmanned vehicle and the police officer in the police car uses the megaphone to command the unmanned vehicle; the unmanned vehicle obeys part of the commands and blocks the fleeing vehicle, and then the police car stops the fleeing vehicle; the rules are complied with, but there is a risk of "not obeying a good person's command". (Operation display and recording are the same as described for fig. 10; a few of the knowledge-base entries defined by these cases are written out below.)
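The experiment cases above implicitly populate the human behavior decision execution knowledge base; a few of its entries written out as data, paraphrased from the figures (the structure and field names are assumptions):

    EXECUTION_KB = {
        "shoot at the fleeing vehicle": {
            "post_state": "fugitive injured; unmanned vehicle blocks the police car; fleeing vehicle escapes",
            "complies_with_rules": True,
            "risk": "helping bad people",
        },
        "shoot at the unmanned vehicle": {
            "post_state": "unmanned vehicle drives into the waiting area; police car continues the chase",
            "complies_with_rules": True,
            "risk": None,
        },
        "command the unmanned vehicle to block the fleeing vehicle": {
            "post_state": "unmanned vehicle obeys and blocks the fleeing vehicle; police car stops it",
            "complies_with_rules": True,
            "risk": None,
        },
        "command the unmanned vehicle to ram the fleeing vehicle": {
            "post_state": "unmanned vehicle disobeys and drives into the waiting area; chase continues",
            "complies_with_rules": True,
            "risk": "not obeying a good person's command",
        },
    }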
If the user clicks "exit virtual experiment" as shown in fig. 33, the page in fig. 34 is shown.
As in fig. 34, the user clicks "submit answer" after selecting. The process proceeds to fig. 35, which displays the animation "from the appearance of the waiting area and the blocking area until the police officer in the police car uses the megaphone to command the unmanned vehicle to block the fleeing vehicle; the unmanned vehicle obeys the command and blocks the fleeing vehicle; then the police car stops the fleeing vehicle"; clicking replay replays the animation.
Referring to fig. 35, the user clicks "start automatic virtual experiment and decision recommendation", leading to fig. 36. From fig. 36 the process then advances automatically, one step per second, through figs. 37 to 42; at each step the modules that have finished executing are shown in blue, and the module currently executing flashes red to indicate that execution is underway.
If the user clicks "submit" as in fig. 42, the page in fig. 43 is shown.
In fig. 43, the user clicks to start the decision, and the correct answer is displayed, as in fig. 44.
After clicking "start human behavior analysis" as in fig. 44, the process proceeds to fig. 45.
After the user clicks "submit" as in fig. 45, the process proceeds to fig. 46.
As shown in fig. 46, after the user clicks "start risk prevention estimation", the animation "from the police car chasing the fleeing vehicle until the police car commands the unmanned vehicle to block the fleeing vehicle and the waiting area and the blocking area appear" is played, as shown in fig. 47.
The above-mentioned embodiments express only several embodiments of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the present invention, and all of these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An artificial intelligence method, the method comprising:
a scene acquisition step: acquiring a scene of an event;
a human behavior decision option step: acquiring a plurality of options of human behavior decision, and prompting the user to select, from the plurality of options, the correct option of the human behavior decision that can prevent the artificial intelligence ethical risk in the scene;
a first selection step: acquiring a request of the user to perform a virtual experiment on the human behavior decisions including the plurality of options, or a first selection result of the user for the plurality of options of the human behavior decision; if a request of the user to perform a virtual experiment on the human behavior decisions including the plurality of options is acquired, continuing with the next step; if a first selection result of the user for the plurality of options is acquired, jumping to the first selection result evaluation step and continuing execution;
an operation acquisition step: acquiring the user's operation on the scene in the virtual experiment;
an operation-corresponding human behavior decision step: determining the human behavior decision corresponding to the operation according to the operation;
a decision execution result acquisition step: retrieving, from a human behavior decision execution knowledge base, the execution result corresponding to the human behavior decision; the human behavior decision execution knowledge base contains correspondences between human behavior decisions and execution results; if the retrieval fails, inputting the human behavior decision into a decision virtual experiment model and taking the output of the decision virtual experiment model as the execution result;
a scene update step: retrieving, from a human behavior decision scene knowledge base, the scene corresponding to the post-execution scene state in the execution result, and updating the scene in the virtual experiment accordingly; the human behavior decision scene knowledge base contains correspondences between post-execution scene states in execution results and scenes; if the retrieval fails, inputting the post-execution scene state into a scene virtual experiment model, taking the output of the scene virtual experiment model as the scene, and updating the scene in the virtual experiment accordingly;
a first selection result evaluation step: acquiring the correct option of the human behavior decision that can prevent the artificial intelligence ethical risk in the scene, comparing the first selection result with the correct option, and judging whether the user's selection is correct as the evaluation result of the first selection result.
2. The artificial intelligence method of claim 1,
the decision execution result acquisition step further comprises: taking each human behavior decision and its corresponding execution result in the human behavior decision execution knowledge base as the input and expected output of a deep learning model, respectively, and training and testing the deep learning model to obtain the decision virtual experiment model; the execution result comprises the post-execution scene state, the judgment of whether the post-execution scene complies with the artificial intelligence ethical rules, and the evaluation of whether the post-execution scene carries an artificial intelligence ethical risk; and displaying the content of the execution result;
the scene update step further comprises: taking each execution result and its corresponding scene in the human behavior decision scene knowledge base as the input and expected output of a deep learning model, respectively, and training and testing the deep learning model to obtain the scene virtual experiment model; updating the behavior state of the humans in the scene according to the execution result; and updating the behavior state of the objects related to the execution result in the scene according to the execution result.
3. The artificial intelligence method of claim 1,
the first selection step further comprises: acquiring the options on which the virtual experiment is to be performed;
the operation acquisition step further comprises: displaying the content of the operation; and acquiring the operation position and operation type of the operation;
the operation-corresponding human behavior decision step further comprises: determining the human behavior decision corresponding to the operation according to the operation position and the operation type; acquiring a request of the user to execute the human behavior decision; if a request of the user to execute the human behavior decision is acquired, judging whether the human behavior decision corresponding to the operation belongs to the options on which the virtual experiment is to be performed, and if so, displaying information that the current operation belongs to the options; and if another operation of the user on the scene in the virtual experiment is acquired, returning to the operation acquisition step to continue execution.
4. The artificial intelligence method of claim 1, wherein the method further comprises:
a human automatic virtual experiment and decision request step: acquiring the user's request for a human automatic virtual experiment and decision;
a human automatic virtual experiment and decision animation step: playing an animation of the human automatic virtual experiment and decision, the animation comprising the input scene, the execution of the algorithm, and the output result; the output result comprising a behavior recommendation capable of preventing the artificial intelligence ethical risk, a scene prediction capable of preventing the artificial intelligence ethical risk, a judgment of whether the artificial intelligence ethical rules are complied with, and an evaluation of the artificial intelligence ethical risk of the human behavior;
a virtual experiment result consistency judgment step: displaying a plurality of options for judging whether the output result in the human automatic virtual experiment and decision animation is consistent with the result of the virtual experiment performed by the user;
a second selection step: acquiring, as the second selection result, the user's selection among the plurality of consistency judgment options;
and a second selection result evaluation step: displaying the correct consistency judgment option, comparing the second selection result with the correct option, and taking the judgment of whether the user's selection is correct as the evaluation result of the second selection result.
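The consistency judgment and its evaluation reduce to comparing the automatic run's output with the user's own result; a minimal sketch, with illustrative function and parameter names:

```python
# Illustrative scoring of the second selection: the user states whether the
# automatic virtual experiment's output matches their own experiment result,
# and that statement is checked against the actual equality comparison.

def evaluate_consistency_selection(user_says_consistent, auto_result, user_result):
    """True when the user's consistency judgment is correct."""
    actually_consistent = (auto_result == user_result)
    return user_says_consistent == actually_consistent
```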
5. The artificial intelligence method of claim 1, wherein the method further comprises:
a risk prevention reason selection step: displaying the correct options of the human behavior decision, acquiring a plurality of candidate reasons why the correct options of the human behavior decision can prevent the artificial intelligence ethical risk, and prompting the user to select, from among these options, the reason why the correct options can prevent the artificial intelligence ethical risk;
a third selection step: acquiring, as the third selection result, the user's selection among the plurality of reason options;
and a third selection result evaluation step: acquiring the correct reason option why the correct human behavior decision can prevent the artificial intelligence ethical risk, comparing the third selection result with the correct option, and taking the judgment of whether the user's selection is correct as the evaluation result of the third selection result.
6. The artificial intelligence method of claim 1, wherein the method further comprises:
a virtual experiment repetition step: acquiring the user's request to perform the virtual experiment again, and, upon acquiring such a request, returning to the operation obtaining step for re-execution;
a virtual experiment exit step: acquiring the user's request to exit the virtual experiment, and, upon acquiring such a request, returning to the human behavior decision option step for re-execution;
and an experiment recording step: storing the user, the operation time, the selection time, the operation content, the execution result, the first selection result, and the evaluation result of the first selection result in a database, and recording them in an experiment report.
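A minimal sketch of the experiment recording step using SQLite; the schema and column names are assumptions chosen to mirror the fields listed above, not a schema from the patent:

```python
# Illustrative persistence of one experiment record. The table layout is an
# assumption; any database could back the experiment report equally well.
import sqlite3

def record_experiment(db_path, user, op_time, sel_time, op_content,
                      exec_result, first_selection, evaluation):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS experiment_report (
        user TEXT, operation_time TEXT, selection_time TEXT,
        operation_content TEXT, execution_result TEXT,
        first_selection TEXT, evaluation TEXT)""")
    conn.execute("INSERT INTO experiment_report VALUES (?,?,?,?,?,?,?)",
                 (user, op_time, sel_time, op_content,
                  exec_result, first_selection, evaluation))
    conn.commit()
    conn.close()
```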
7. The artificial intelligence method of claim 1, wherein the method further comprises:
a correct behavior decision execution request step: acquiring the user's request to execute the correct human behavior decision;
and an execution scene playing step: playing the scene in the virtual experiment in which the correct human behavior decision is executed.
8. An artificial intelligence device, wherein the device is configured to implement the steps of the method of any one of claims 1 to 7.
9. A robot comprising a memory, a processor and an artificial intelligence robot program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 7 are carried out when the program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011300239.2A 2020-11-19 2020-11-19 Artificial intelligence ethical virtual simulation experiment method based on human decision and robot Active CN112418436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011300239.2A CN112418436B (en) 2020-11-19 2020-11-19 Artificial intelligence ethical virtual simulation experiment method based on human decision and robot

Publications (2)

Publication Number Publication Date
CN112418436A true CN112418436A (en) 2021-02-26
CN112418436B CN112418436B (en) 2022-06-21

Family

ID=74773567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011300239.2A Active CN112418436B (en) 2020-11-19 2020-11-19 Artificial intelligence ethical virtual simulation experiment method based on human decision and robot

Country Status (1)

Country Link
CN (1) CN112418436B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109523029A (en) * 2018-09-28 2019-03-26 清华大学深圳研究生院 For the adaptive double from driving depth deterministic policy Gradient Reinforcement Learning method of training smart body
CN109934341A (en) * 2017-11-13 2019-06-25 埃森哲环球解决方案有限公司 The model of training, verifying and monitoring artificial intelligence and machine learning
US20190333636A1 (en) * 2013-07-09 2019-10-31 Indiana University Research And Technology Corporation Clinical decision-making artificial intelligence object oriented system and method
CN110427682A (en) * 2019-07-26 2019-11-08 清华大学 A kind of traffic scene simulation experiment platform and method based on virtual reality
CN111009322A (en) * 2019-10-21 2020-04-14 四川大学华西医院 Perioperative risk assessment and clinical decision intelligent auxiliary system
CN111812999A (en) * 2020-06-08 2020-10-23 华南师范大学 Artificial intelligence ethical risk and prevention virtual simulation method, system and robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHU Dingju: "Deep learning over IoT big data-based ubiquitous parking guidance", Personal and Ubiquitous Computing *
BAO Anbing et al.: "Ethical risks of medical artificial intelligence and countermeasures", Medicine and Philosophy *

Also Published As

Publication number Publication date
CN112418436B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
JP2021517292A (en) Automatically reduce the use of cheat software in your online gaming environment
CN111249742B (en) Cheating user detection method and device, storage medium and electronic equipment
CN111823227B (en) Artificial intelligent ethical risk detection and prevention method, deep learning system and robot
US20110111385A1 (en) Automated training system and method based on performance evaluation
CN111812999A (en) Artificial intelligence ethical risk and prevention virtual simulation method, system and robot
CN112819174A (en) Artificial intelligence algorithm-based improved ethical virtual simulation experiment method and robot
US20210402301A1 (en) Server-Based Mechanics Help Determination from Aggregated User Data
Paduraru et al. RiverGame - a game testing tool using artificial intelligence
CN111860766A (en) Artificial intelligence ethical rule reasoning method, deep learning system and robot
CN112418436B (en) Artificial intelligence ethical virtual simulation experiment method based on human decision and robot
CN112434816B (en) Artificial intelligence decision-making-based ethical virtual simulation experiment method and robot
CN112418437B (en) Multi-person decision-making-based ethical simulation virtual experiment method and robot
CN112446502A (en) Human decision-making and prevention artificial intelligence ethical risk virtual experiment method and robot
CN113018853B (en) Data processing method, data processing device, computer equipment and storage medium
CN110314379B (en) Learning method of action output deep training model and related equipment
CN112508195B (en) Artificial intelligence ethical rule revision-based ethical simulation experiment method and robot
CN112580818A (en) Artificial intelligence algorithm improved ethical risk prevention virtual experiment method and robot
CN112446504A (en) Artificial intelligence body decision-making and ethical risk prevention virtual experiment method and robot
CN112446503B (en) Multi-person decision-making and potential ethical risk prevention virtual experiment method and robot
CN112085216A (en) Artificial intelligence ethical risk identification and prevention method based on ethical risk assessment
CN112561075A (en) Artificial intelligence ethical rule revision risk prevention virtual experiment method and robot
CN112149837A (en) Artificial intelligence ethical risk identification and prevention method based on algorithm selection and robot
CN112085214A (en) Artificial intelligence ethical risk identification and prevention method based on human decision and robot
US20220339542A1 (en) Video game overlay
CN112085210A (en) Artificial intelligence ethical risk identification and prevention method based on ethical rule judgment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 510631 No. 55, Zhongshan Avenue, Tianhe District, Guangdong, Guangzhou

Applicant after: SOUTH CHINA NORMAL University

Address before: 510000 Shipai campus, South China Normal University, Guangzhou, Guangdong Province

Applicant before: SOUTH CHINA NORMAL University

GR01 Patent grant