CN108133259A - System and method for interaction between an artificial virtual life and the external world - Google Patents
- Publication number: CN108133259A
- Application number: CN201711337403.5A
- Authority
- CN
- China
- Prior art keywords
- virtual life
- artificial virtual
- decision
- action
- artificial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/008—Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
Abstract
The present invention provides a system and method for interaction between an artificial virtual life and the external world. The system includes a cognition module for storing the cognitive information of the artificial virtual life; a sensing module for perceiving external information under the current scene; a decision module for determining an interaction decision for the current scene according to the external information and the cognitive information; and an action module for determining an action sequence matched with the interaction decision and acting according to the action sequence, thereby realizing interaction between the artificial virtual life and the external environment. The system and method provided by the invention aim to enable the artificial virtual life, like a living organism, to perceive external information and respond to the current scene with corresponding interactive actions, that is, to perceive and interact with the external world freely.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a system and method for interaction between an artificial virtual life and the external world.
Background art
Current robot products can be divided into the following classes by form:
1. No physical form, such as Microsoft's XiaoIce chatbot hosted in WeChat, which interacts with people mainly through text.
2. No limbs and unable to move, such as the Amazon Echo speaker, which interacts with people through voice to meet needs such as listening to music and news, and shopping.
3. With limbs but immovable, such as industrial robotic arms, which carry out assembly-line operations by receiving control commands.
4. Movable, such as sweeping robots, which interact with users through physical buttons, apps, voice, etc., interact with the external environment through cameras, and complete specific operations.
In summary, the objects that current robots interact with are mainly people, and the means of interaction are limited, mainly text, voice and physical buttons through wired or wireless control terminals. Although some robots also obtain external information through cameras, the camera serves only as an auxiliary means for the robot to complete its own task. That is to say, existing robot products have limited interaction means and can only passively perform tasks; they cannot, like humans and other living organisms, freely perceive and interact with the external world.
Summary of the invention
The technical problem to be solved by the present invention is to provide a system and method for interaction between an artificial virtual life and the external world, so as to overcome the shortcoming that robots in the prior art can only passively perform tasks and cannot, like humans and other living organisms, freely perceive and interact with the external world.
To solve the above technical problem, the technical solution provided by the present invention is as follows.
In one aspect, the present invention provides a system for interaction between an artificial virtual life and the external world, including:
a cognition module for storing the cognitive information of the artificial virtual life;
a sensing module for perceiving external information under the current scene;
a decision module for determining an interaction decision for the current scene according to the external information and the cognitive information; and
an action module for determining an action sequence matched with the interaction decision and acting according to the action sequence, thereby realizing interaction between the artificial virtual life and the external environment.
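The four-module architecture described above can be sketched in code. The sketch below is purely illustrative, not an implementation from the patent: all class names, the example cognitive information and the toy decision rule are hypothetical stand-ins.

```python
class CognitionModule:
    """Stores the cognitive information of the artificial virtual life."""
    def __init__(self):
        # Hypothetical cognitive information for the sketch.
        self.cognitive_info = {"known_faces": ["alice"]}

class SensingModule:
    """Perceives external information under the current scene."""
    def perceive(self, scene):
        return {"heard": scene.get("speech"), "saw": scene.get("face")}

class DecisionModule:
    """Combines external and cognitive information into an interaction decision."""
    def decide(self, external, cognitive):
        if external["saw"] in cognitive["known_faces"]:
            return "greet_known_person"
        if external["heard"]:
            return "answer_speech"
        return "idle"

class ActionModule:
    """Maps an interaction decision to a matching action sequence."""
    SEQUENCES = {
        "greet_known_person": ["smile", "wave", "say_hello"],
        "answer_speech": ["face_speaker", "voice_response"],
        "idle": ["scan_surroundings"],
    }
    def act(self, decision):
        return self.SEQUENCES[decision]

def interact(scene):
    """One perceive -> decide -> act cycle through the four modules."""
    cognition, sensing = CognitionModule(), SensingModule()
    decision_mod, action_mod = DecisionModule(), ActionModule()
    external = sensing.perceive(scene)
    decision = decision_mod.decide(external, cognition.cognitive_info)
    return action_mod.act(decision)
```

A known face thus triggers a greeting sequence, speech alone triggers a voice response, and an empty scene leaves the life scanning its surroundings.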
Further, the sensing module includes a visual unit, an auditory unit, a haptic unit, a taste unit, a smell unit and a spatial perception unit, wherein:
the hardware bases of the visual, auditory, haptic, taste and smell units are, respectively, a camera, a microphone, a touch sensor, a taste sensor and an olfactory sensor; and
the hardware basis of the spatial perception unit is GPS and a gyroscope.
Further, the action sequence includes making a facial expression, a limb action or a behavior action, setting its own state, outwardly sending an instruction to control another object, and making a voice response.
Further, the artificial virtual life includes an artificial virtual life existing in reality in machine form and an artificial virtual life existing in virtual form.
Further, the decision module is specifically used for determining the interaction decision for the current scene according to the external information and the cognitive information, and according to a pre-built decision rule or decision model, wherein:
the decision rule is a rule information bank from concrete scenes to interaction decisions, established according to the functional requirements of the artificial virtual life; and
the decision model is an artificial intelligence model trained with perception information of concrete scenes and the corresponding decision information as input.
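A rule information bank of the kind just described can be sketched as an ordered list of scene conditions mapped to decisions. The scene features and decision names below are hypothetical; a real rule bank would be derived from the functional requirements of the artificial virtual life, as the text says.

```python
# A hypothetical rule information bank: each entry maps a concrete scene
# condition to an interaction decision; rules are checked in priority order.
RULES = [
    (lambda scene: scene.get("person_greeting"), "decide_conversation_object"),
    (lambda scene: scene.get("question_heard"), "decide_consult_information"),
    (lambda scene: scene.get("object_lost"), "decide_search_object"),
]

def rule_decision(scene, default="observe"):
    """Return the decision of the first rule whose condition holds."""
    for condition, decision in RULES:
        if condition(scene):
            return decision
    return default
```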
Further, the action module is specifically used for determining the action sequence matched with the interaction decision according to pre-built mapping relations, and acting according to the action sequence, realizing interaction between the artificial virtual life and the external environment.
Further, the mapping relations are built by:
taking interaction decisions and the corresponding action sequences as the input of an artificial intelligence model, and building the mapping relations between interaction decisions and action sequences through continuous training and reinforcement learning;
and/or establishing the mapping relations from interaction decisions to action sequences by hand, with reference to a multidisciplinary expert system;
and/or, in a semi-supervised manner, correcting the mapping established by the artificial intelligence model on the basis of rules established by a small number of experts, so as to build the mapping relations between interaction decisions and action sequences.
In another aspect, the present invention also provides a method for interaction between an artificial virtual life and the external world, including:
a cognition step, in which the cognition module stores the cognitive information of the artificial virtual life;
a sensing step, in which the sensing module perceives external information under the current scene;
a decision step, in which the decision module determines an interaction decision for the current scene according to the external information and the cognitive information; and
an action step, in which the action module determines an action sequence matched with the interaction decision and acts according to the action sequence, realizing interaction between the artificial virtual life and the external environment.
Further, the decision step specifically includes determining the interaction decision for the current scene according to the external information and the cognitive information, and according to a pre-built decision rule or decision model; and the action step specifically includes determining the action sequence matched with the interaction decision according to pre-built mapping relations, and acting according to the action sequence, realizing interaction between the artificial virtual life and the external environment.
In the system and method provided by the present invention, the artificial virtual life, like a human or other living organism, has a cognition module for storing its cognitive information, a sensing module for perceiving external information under the current scene, a decision module for determining an interaction decision for the current scene according to the external and cognitive information, and an action module for determining an action sequence matched with the interaction decision and acting accordingly, thereby realizing interaction with the external environment. That is to say, the artificial virtual life in the present invention can, like a living organism, perceive external information and respond to the current scene with corresponding interactive actions, i.e., perceive and interact with the external world freely.
Description of the drawings
Fig. 1 is a block diagram of the system for interaction between an artificial virtual life and the external world provided by an embodiment of the present invention;
Fig. 2 is a flow chart of the method for interaction between an artificial virtual life and the external world provided by an embodiment of the present invention.
Specific embodiment
The present invention is further illustrated below by specific embodiments. It should be understood, however, that these embodiments serve only for more detailed description and are not to be construed as limiting the present invention in any form.
Embodiment one
With reference to Fig. 1, the system for interaction between an artificial virtual life and the external world provided in this embodiment includes:
cognition module 1, for storing the cognitive information of the artificial virtual life;
sensing module 2, for perceiving external information under the current scene;
decision module 3, for determining an interaction decision for the current scene according to the external information and the cognitive information; and
action module 4, for determining an action sequence matched with the interaction decision and acting according to the action sequence, realizing interaction between the artificial virtual life and the external environment.
In the system provided by this embodiment of the present invention, the artificial virtual life, like a human or other living organism, includes, but is not limited to, cognition module 1, sensing module 2, decision module 3 and action module 4, with the functions listed above. That is to say, the artificial virtual life in the present invention can, like a living organism, perceive external information and respond to the current scene with corresponding interactive actions, i.e., perceive and interact with the external world freely.
It should be noted that, in this embodiment, the interaction objects of the artificial virtual life are things or living bodies other than "itself", specifically including a person, another artificial virtual life, another living organism, another intelligent life product, and the like.
Preferably, sensing module 2 includes a visual unit, an auditory unit, a haptic unit, a taste unit, a smell unit and a spatial perception unit, wherein the hardware bases of the visual, auditory, haptic, taste and smell units are, respectively, a camera, a microphone, a touch sensor, a taste sensor and an olfactory sensor, and the hardware basis of the spatial perception unit is GPS and a gyroscope.
In this embodiment, with reference to the human perceptual system and the concrete situation of the artificial virtual life, sensing module 2 is divided into a visual unit, an auditory unit, a haptic unit, a taste unit, a smell unit and a spatial perception unit. The first five units correspond to the human senses and depend on various cameras, microphones and other sensors as their hardware basis; the spatial perception unit is exclusive to the artificial virtual life, with electronic devices such as GPS and gyroscopes as its hardware basis. It should be noted that, in this embodiment, the camera mentioned should not be understood merely as a single camera, but as a camera, an array of cameras, or another component with similar functions. Likewise, the microphone mentioned should not be understood merely as a single microphone, but as a microphone, a microphone array, or another component with similar functions; moreover, the microphone referred to is not limited to the microphones generally used to collect sound in the range audible to the human ear, but can also be an uncommon, special-purpose microphone, for example one that can collect high-frequency, low-frequency, ultra-high-frequency or ultra-low-frequency sound. In addition, the various types of sensors cited in this embodiment are only preferred examples, not specific restrictions, and can be selected according to actual demand.
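Since the set of attached sensors can vary with actual demand, a thin software layer over the six perception units might simply poll whichever sensors are present. The unit names and the read-function interface below are hypothetical illustrations, not part of the patent.

```python
# The six perception units named in the text.
PERCEPTION_UNITS = ["vision", "hearing", "touch", "taste", "smell", "spatial"]

def perceive(sensor_readers):
    """Collect one reading per perception unit.

    sensor_readers: mapping of unit name -> zero-argument read function;
    units with no attached hardware report None, so missing sensors
    degrade gracefully rather than failing.
    """
    return {unit: sensor_readers[unit]() if unit in sensor_readers else None
            for unit in PERCEPTION_UNITS}
```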
Specifically, the interaction decision includes, but is not limited to, deciding on a conversation object, deciding to consult information, deciding on an object to move toward, and deciding on an object to search for.
More preferably, the action sequence includes, but is not limited to, making a facial expression, a limb action or a behavior action, setting its own state, outwardly sending an instruction to control another object, and making a voice response.
In this embodiment, in order to carry out the content of the interaction decision, the artificial virtual life needs to act accordingly. These actions include all the activities the artificial virtual life performs while interacting with the external world, specifically including but not limited to: making a facial expression; making a limb action, for example raising a hand, waving, or shaking the head; making a behavior action, such as turning left, turning right, moving forward, or running; setting its own state, such as turning on a built-in projector to achieve augmented reality; outwardly sending instructions to control other objects; and making a voice response.
More preferably, the artificial virtual life includes an artificial virtual life existing in reality in machine form and an artificial virtual life existing in virtual form. It should be noted that, in this embodiment, the artificial virtual life that makes the actions can be either an artificial virtual life existing in reality in machine form or one existing in virtual form; for example, it may be an augmented-reality virtual character displayed in front of the user in holographic form and capable of carrying out the corresponding actions.
Further, decision module 3 is specifically used for determining the interaction decision for the current scene according to the external information and the cognitive information, and according to a pre-built decision rule or decision model, wherein:
the decision rule is a rule information bank from concrete scenes to interaction decisions, established according to the functional requirements of the artificial virtual life; and
the decision model is an artificial intelligence model trained with perception information of concrete scenes and the corresponding decision information as input.
In this embodiment, determining the interaction decision according to the current scene is a core problem. Optionally, a decision rule or decision model can be built in advance, so that in a practical scene the interaction decision for the current scene can be determined according to the external information and the cognitive information, and according to the pre-built decision rule or decision model. Specifically, the decision rule from concrete situations to interaction decisions is established according to the functional requirements of the artificial virtual life. The decision model is an artificial intelligence model; the final decision model is obtained by training with a large amount of perception information of a certain environment, and the interaction decisions corresponding to that scene, as input. It should be noted that the decision model can be a combination of multiple sub-models, for example a combination of one or more of a facial expression recognition model, an object recognition model, a natural language processing model, and the like.
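The combination of sub-models can be sketched as follows. The stub sub-models and the merging rule are hypothetical, standing in for trained expression-recognition and natural-language models.

```python
def combined_decision(scene, submodels):
    """Merge sub-model outputs into one interaction decision.

    submodels: mapping of name -> callable(scene); here an expression
    recognizer and a language model are assumed.
    """
    expression = submodels["expression"](scene)
    intent = submodels["language"](scene)
    # A small hand-written rule merges the sub-model signals.
    if expression == "smiling":
        return "decide_conversation_object"
    if intent == "question":
        return "decide_consult_information"
    return "observe"
```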
Further, action module 4 is specifically used for determining the action sequence matched with the interaction decision according to pre-built mapping relations, and acting according to the action sequence, realizing interaction between the artificial virtual life and the external environment.
Further, the mapping relations are built by:
taking interaction decisions and the corresponding action sequences as the input of an artificial intelligence model, and building the mapping relations between interaction decisions and action sequences through continuous training and reinforcement learning;
and/or establishing the mapping relations from interaction decisions to action sequences by hand, with reference to a multidisciplinary expert system;
and/or, in a semi-supervised manner, correcting the mapping established by the artificial intelligence model on the basis of rules established by a small number of experts, so as to build the mapping relations between interaction decisions and action sequences.
In this embodiment, determining from the interaction decision the action the artificial virtual life should take is a core point of this patent. Optionally, the mapping relations can be built in advance, so that in a practical scene the action sequence matched with the interaction decision can be determined according to the pre-built mapping relations, and the action carried out according to the action sequence, realizing interaction between the artificial virtual life and the external environment.
Specifically, the mapping relations are built as follows.
Interaction decisions and the corresponding action sequences are taken as the input of an artificial intelligence model, and the mapping relations between interaction decisions and action sequences are built through continuous training and reinforcement learning. In this embodiment, an artificial intelligence model is used to establish the mapping: the actions that enough people perform to reach a certain interaction decision under certain conditions are recorded, the resulting data are used as the input of the artificial intelligence model, and through continuous training and reinforcement learning a model is finally obtained of the actions required to reach a certain decision under certain conditions.
And/or, with reference to a multidisciplinary expert system, the mapping relations from interaction decisions to action sequences are established by hand: experts in subjects such as sociology, psychology, behavioral science and micro-expression psychology hand-build the mappings, or rules, from decisions to action sequences.
And/or, in a semi-supervised manner, the mapping established by the artificial intelligence model is corrected on the basis of rules established by a small number of experts, so as to build the mapping relations between interaction decisions and action sequences.
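The semi-supervised mode, in which a few expert-established rules correct the model-built mapping, can be sketched as an override merge. The example decision names and action sequences are hypothetical.

```python
def corrected_mapping(model_mapping, expert_rules):
    """Merge a learned decision -> action-sequence mapping with expert rules.

    Expert-established rules take precedence wherever they overlap with
    the model's mapping; elsewhere the learned mapping is kept.
    """
    merged = dict(model_mapping)
    merged.update(expert_rules)
    return merged
```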
Embodiment two
With reference to Fig. 2, the method for interaction between an artificial virtual life and the external world provided in this embodiment includes:
cognition step S1, in which cognition module 1 stores the cognitive information of the artificial virtual life;
sensing step S2, in which sensing module 2 perceives external information under the current scene;
decision step S3, in which decision module 3 determines an interaction decision for the current scene according to the external information and the cognitive information; and
action step S4, in which action module 4 determines an action sequence matched with the interaction decision and acts according to the action sequence, realizing interaction between the artificial virtual life and the external environment.
In the method provided by this embodiment of the present invention, the artificial virtual life, like a human or other living organism, includes, but is not limited to, cognition module 1, sensing module 2, decision module 3 and action module 4, with the functions described in Embodiment one. That is to say, the artificial virtual life in the present invention can, like a living organism, perceive external information and respond to the current scene with corresponding interactive actions, i.e., perceive and interact with the external world freely.
It should be noted that, in this embodiment, the interaction objects of the artificial virtual life, the preferred composition of sensing module 2 (visual, auditory, haptic, taste, smell and spatial perception units and their hardware bases), and the notes on how the cameras, microphones and other sensors should be understood and selected, are the same as described in Embodiment one.
More preferably, the interaction decision, the action sequence, and the machine and virtual forms of the artificial virtual life, together with the corresponding notes, are likewise the same as described in Embodiment one.
Preferably, decision step S3 specifically includes determining the interaction decision for the current scene according to the external information and the cognitive information, and according to a pre-built decision rule or decision model; and action step S4 specifically includes determining the action sequence matched with the interaction decision according to pre-built mapping relations, and acting according to the action sequence, realizing interaction between the artificial virtual life and the external environment.
In the present embodiment, the problem of interactive decision making is a core is determined according to current scene.Optionally, it can build in advance
Decision rule or decision model, can be according to external information and cognitive information, and according to advance in this way, in practical scene
The decision rule or decision model of structure determine the interactive decision making for current scene.Specifically, according to artificial virtual life
Functional requirement is established by the decision rule of concrete condition to interactive decision making.Decision model is artificial intelligence model, and with a large amount of right
The perception information of a certain environment interactive decision making corresponding with the scene obtains final decision model as input, training.It needs
Illustrate, decision model can be the combination of multiple second-level models, for example, it may be facial expression recognition model, object are known
The combination of one or more of other model, Natural Language Processing Models etc..
In the present embodiment, the action that determining artificial virtual life by interactive decision making should take is one that this patent is realized
Core point optionally, can build mapping relations in advance, in this way, in practical scene, can be closed according to the mapping built in advance
System determines and the matched action sequence of interactive decision making, and is acted according to action sequence, realizes artificial virtual life and the external world
The interaction of environment.
Specifically, the building mode of mapping relations is,
Using interactive decision making and action sequence corresponding with interactive decision making as artificial intelligence model input, by constantly
Training and enhancing study, build the mapping relations between interactive decision making and action sequence;In the present embodiment, artificial intelligence model is used
Establish mapping.It records enough people and reaches the action that a certain interactive decision making is done, the number that then will be obtained under certain conditions
According to the input as artificial intelligence model, by constantly training and enhance study, finally obtain one and under certain conditions
Reach the action done required for certain decision.
And/or with reference to multidisciplinary expert system, the mapping relations by interactive decision making to action sequence are established by hand;With reference to
Expert system is established by hand by the expert of the subjects such as sociology, psychology, behaviouristics, micro- expression psychology by decision to action
The mapping of sequence or rule.
And/or, in a semi-supervised manner, the mappings established by the artificial intelligence model are corrected on the basis of a small number of expert-established rules, so as to build the mapping relations between interactive decisions and action sequences.
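One simple reading of this correction step is that wherever an expert rule exists it overrides the model-learned sequence, and the model's proposal is kept otherwise. This is a sketch under that assumption; the data is invented.

```python
# Sketch of semi-supervised correction: a small set of expert rules
# overrides the model-learned mapping where the two overlap.
def correct_mapping(model_mapping: dict, expert_rules: dict) -> dict:
    corrected = dict(model_mapping)
    corrected.update(expert_rules)  # expert rules take precedence
    return corrected

model_mapping = {"greet_user": ["wave_hand"], "comfort_user": ["hug"]}
expert_rules = {"comfort_user": ["soft_expression", "say:it_is_ok"]}
final = correct_mapping(model_mapping, expert_rules)
# "greet_user" keeps the model's sequence; "comfort_user" is corrected.
```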
Although the present invention has been described to a certain degree, it is apparent that appropriate variations of each condition may be made without departing from the spirit and scope of the present invention. It is to be understood that the present invention is not limited to the embodiments described, but extends to the scope of the claims, including equivalent replacements of each element.
Claims (9)
1. A system for interaction between an artificial virtual life and the external world, characterized in that it comprises:
a cognition module, for storing cognitive information of the artificial virtual life;
a sensing module, for perceiving external information in a current scene;
a decision-making module, for determining an interactive decision for the current scene according to the external information and the cognitive information; and
an action module, for determining an action sequence matching the interactive decision and acting according to the action sequence, realizing the interaction between the artificial virtual life and the external environment.
2. The system for interaction between an artificial virtual life and the external world according to claim 1, characterized in that the sensing module comprises a visual unit, an auditory unit, a haptic unit, a taste unit, a smell unit and a spatial perception unit; wherein
the hardware bases of the visual unit, the auditory unit, the haptic unit, the taste unit and the smell unit are, respectively, a camera, a microphone, a touch sensor, a taste sensor and an olfactory sensor; and the hardware basis of the spatial perception unit is a GPS and a gyroscope.
3. The system for interaction between an artificial virtual life and the external world according to claim 1, characterized in that the action sequence comprises: making a facial expression, making a limb action or behavioural action, setting the artificial virtual life's own state, outwardly sending an instruction for controlling another object, and making a voice response.
4. The system for interaction between an artificial virtual life and the external world according to claim 1, characterized in that the artificial virtual life comprises an artificial virtual life existing in reality in machine form, and an artificial virtual life existing in virtual form.
5. The system for interaction between an artificial virtual life and the external world according to claim 1, characterized in that the decision-making module is specifically used for:
determining the interactive decision for the current scene according to the external information and the cognitive information, and according to a pre-built decision rule or decision model; wherein
the decision rule is a rule information base, from concrete scenes to interactive decisions, established according to the functional requirements of the artificial virtual life; and
the decision model is an artificial intelligence model obtained by training with perception information of concrete scenes and the decision information corresponding to that perception information as input.
6. The system for interaction between an artificial virtual life and the external world according to claim 1, characterized in that the action module is specifically used for determining the action sequence matching the interactive decision according to pre-built mapping relations, and acting according to the action sequence, realizing the interaction between the artificial virtual life and the external environment.
7. The system for interaction between an artificial virtual life and the external world according to claim 6, characterized in that the mapping relations are built by:
using interactive decisions and the action sequences corresponding to them as the input of an artificial intelligence model, and building the mapping relations between interactive decisions and action sequences through continuous training and reinforcement learning;
and/or, with reference to a multidisciplinary expert system, establishing the mapping relations from interactive decisions to action sequences by hand;
and/or, in a semi-supervised manner, correcting the mappings established by the artificial intelligence model on the basis of a small number of expert-established rules, so as to build the mapping relations between interactive decisions and action sequences.
8. A method for interaction between an artificial virtual life and the external world, characterized in that it comprises:
a cognition step, in which a cognition module stores cognitive information of the artificial virtual life;
a perception step, in which a sensing module perceives external information in a current scene;
a decision-making step, in which a decision-making module determines an interactive decision for the current scene according to the external information and the cognitive information; and
an action step, in which an action module determines an action sequence matching the interactive decision and acts according to the action sequence, realizing the interaction between the artificial virtual life and the external environment.
9. The method for interaction between an artificial virtual life and the external world according to claim 8, characterized in that:
the decision-making step specifically comprises determining the interactive decision for the current scene according to the external information and the cognitive information, and according to a pre-built decision rule or decision model; and
the action step specifically comprises determining the action sequence matching the interactive decision according to pre-built mapping relations, and acting according to the action sequence, realizing the interaction between the artificial virtual life and the external environment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711337403.5A CN108133259A (en) | 2017-12-14 | 2017-12-14 | The system and method that artificial virtual life is interacted with the external world |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108133259A true CN108133259A (en) | 2018-06-08 |
Family
ID=62389574
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711337403.5A Pending CN108133259A (en) | 2017-12-14 | 2017-12-14 | The system and method that artificial virtual life is interacted with the external world |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108133259A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103823551A (en) * | 2013-03-17 | 2014-05-28 | 浙江大学 | System and method for realizing multidimensional perception of virtual interaction |
CN105511608A (en) * | 2015-11-30 | 2016-04-20 | 北京光年无限科技有限公司 | Intelligent robot based interaction method and device, and intelligent robot |
CN106022294A (en) * | 2016-06-01 | 2016-10-12 | 北京光年无限科技有限公司 | Intelligent robot-oriented man-machine interaction method and intelligent robot-oriented man-machine interaction device |
CN106663127A (en) * | 2016-07-07 | 2017-05-10 | 深圳狗尾草智能科技有限公司 | An interaction method and system for virtual robots and a robot |
CN106660209A (en) * | 2016-07-07 | 2017-05-10 | 深圳狗尾草智能科技有限公司 | Intelligent robot control system, method and intelligent robot |
CN107150347A (en) * | 2017-06-08 | 2017-09-12 | 华南理工大学 | Robot perception and understanding method based on man-machine collaboration |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109685196A (en) * | 2018-12-13 | 2019-04-26 | 山东大学 | The autonomous cognitive development system and method for neural network and dynamic audiovisual fusion is associated with based on increment type |
CN111190479A (en) * | 2019-03-29 | 2020-05-22 | 码赫镭(上海)数字科技有限公司 | Embedded application system of intelligent terminal equipment |
CN112155485A (en) * | 2020-09-14 | 2021-01-01 | 江苏美的清洁电器股份有限公司 | Control method, control device, cleaning robot and storage medium |
CN112182173A (en) * | 2020-09-23 | 2021-01-05 | 支付宝(杭州)信息技术有限公司 | Human-computer interaction method and device based on virtual life and electronic equipment |
CN112734044A (en) * | 2020-11-26 | 2021-04-30 | 清华大学 | Man-machine symbiosis method and system |
CN112734044B (en) * | 2020-11-26 | 2023-08-01 | 清华大学 | Man-machine symbiotic method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108133259A (en) | The system and method that artificial virtual life is interacted with the external world | |
JP6816925B2 (en) | Data processing method and equipment for childcare robots | |
CN109432753B (en) | Action correcting method, device, storage medium and electronic equipment | |
JP2020064616A (en) | Virtual robot interaction method, device, storage medium, and electronic device | |
JP2019535055A (en) | Perform gesture-based operations | |
CN113946211A (en) | Method for interacting multiple objects based on metauniverse and related equipment | |
CN107679519A (en) | A kind of multi-modal interaction processing method and system based on visual human | |
US9690784B1 (en) | Culturally adaptive avatar simulator | |
EP3113113B1 (en) | Method and apparatus for freeform cutting of digital three dimensional structures | |
WO2018000267A1 (en) | Method for generating robot interaction content, system, and robot | |
CN108037825A (en) | The method and system that a kind of virtual idol technical ability is opened and deduced | |
US20230273685A1 (en) | Method and Arrangement for Handling Haptic Feedback | |
CN111383642A (en) | Voice response method based on neural network, storage medium and terminal equipment | |
Kang | Effect of interaction based on augmented context in immersive virtual reality environment | |
CN114904268A (en) | Virtual image adjusting method and device, electronic equipment and storage medium | |
WO2019087502A1 (en) | Information processing device, information processing method, and program | |
Kenyon et al. | Human augmentics: Augmenting human evolution | |
CN110822646B (en) | Control method of air conditioner, air conditioner and storage medium | |
CN110822649B (en) | Control method of air conditioner, air conditioner and storage medium | |
CN112684890A (en) | Physical examination guiding method and device, storage medium and electronic equipment | |
US20200082293A1 (en) | Party-specific environmental interface with artificial intelligence (ai) | |
CN108205372B (en) | Operation method and device applied to virtual reality equipment and virtual reality equipment | |
EP4375805A1 (en) | Audio output | |
KR102525661B1 (en) | Method for real-time training for remote control of working device and apparatus thereof | |
US20220405361A1 (en) | Systems and methods for correcting data to match user identity |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: Room 301, Building 39, 239 Renmin Road, Gusu District, Suzhou City, Jiangsu Province, 215000. Applicant after: SHENZHEN GOWILD ROBOTICS Co.,Ltd. Address before: 518000 Dongfang Science and Technology Building 1307-09, 16 Keyuan Road, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province. Applicant before: SHENZHEN GOWILD ROBOTICS Co.,Ltd. |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180608 |