CN108229640A - Emotion expression method, apparatus, and robot - Google Patents

Info

Publication number
CN108229640A
Authority
CN
China
Prior art keywords
information
emotional
feedback
layer
feedback information
Prior art date
Legal status
Granted
Application number
CN201611200796.0A
Other languages
Chinese (zh)
Other versions
CN108229640B (en)
Inventor
Not announced (the inventor requested that their name not be published)
Current Assignee
Shanxi YiTianXia Intelligent Technology Co.,Ltd.
Original Assignee
Shenzhen Guangqi Hezhong Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Guangqi Hezhong Technology Co Ltd
Priority to CN201611200796.0A (granted as CN108229640B)
Priority to PCT/CN2017/092037 (published as WO2018113260A1)
Publication of CN108229640A
Application granted
Publication of CN108229640B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/008 Artificial life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses an emotion expression method, apparatus, and robot. The method includes: matching input external-stimulus information against a preset neural network model to generate emotional feedback information corresponding to the external-stimulus information; parsing the emotional feedback information according to a preset deep reinforcement learning model to obtain an emotion response result corresponding to the emotional feedback information; and outputting the emotion response result. The invention solves the technical problem that, in the related art, a robot's emotion system produces its emotional feedback from manually set conditions, which makes the robot's emotion learning inefficient and leaves its emotional personality coverage incomplete.

Description

Emotion expression method, apparatus, and robot
Technical field
The present invention relates to the field of electronic technology applications, and in particular to an emotion expression method, apparatus, and robot.
Background technology
Existing robot emotion systems concentrate on computing the current, instantaneous mood, taking into account the influence of the previous time point's mood on the current one. However, there is no research or design that develops mood into personality. At the same time, current mood-transition models all use manually specified transition conditions: when an external stimulus reaches a preset condition, the mood transitions. Such a model involves no learning by the machine itself. Likewise, the way a given emotion is expressed is entirely based on behavior sets designed in advance by the designer.
For the above problem in the related art, namely that a robot's emotion system produces its emotional feedback from manually set conditions, making emotion learning inefficient and leaving emotional personality coverage incomplete, no effective solution has yet been proposed.
Summary of the invention
Embodiments of the present invention provide an emotion expression method, apparatus, and robot, at least to solve the technical problem that, in the related art, a robot's emotion system produces its emotional feedback from manually set conditions, making the robot's emotion learning inefficient and leaving its emotional personality coverage incomplete.
According to one aspect of the embodiments of the present invention, an emotion expression method is provided, including: matching input external-stimulus information against a preset neural network model to generate emotional feedback information corresponding to the external-stimulus information; parsing the emotional feedback information according to a preset deep reinforcement learning model to obtain an emotion response result corresponding to the emotional feedback information; and outputting the emotion response result.
Optionally, matching the input external-stimulus information against the preset neural network model to generate the emotional feedback information corresponding to the external-stimulus information includes: identifying the external-stimulus information through a multilayer neural network model composed of multiple single-layer neural network models, to obtain the emotional feedback information corresponding to the external-stimulus information.
Further, optionally, the external-stimulus information includes at least one of: the current environment state, the acoustic environment, the visual environment, or the motion state.
Optionally, identifying the external-stimulus information through the multilayer neural network model composed of multiple single-layer neural network models to obtain the emotional feedback information corresponding to the external-stimulus information includes: inputting the external-stimulus information into a first-layer neural network model for data processing to obtain a first-layer neural feedback output; inputting the first-layer neural feedback into a second-layer neural network model for data processing to obtain a second-layer neural feedback output; and passing the second-layer neural feedback output through the remaining neural network models layer by layer to finally obtain the emotional feedback information.
Further, optionally, the neural network model includes: y = w*x + b, where w is a weight, x is a layer's neural feedback output, y is the next-layer neural feedback output corresponding to that layer's output, and b is a bias term.
Optionally, the weight is dynamically adjusted by back-propagation according to the type of each layer's neural network model.
Further, optionally, parsing the emotional feedback information according to the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information includes: inputting the emotional feedback information into the preset deep reinforcement learning model; and parsing the emotional feedback information according to a reward function in the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information.
Optionally, parsing the emotional feedback information according to the reward function in the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information includes: parsing the emotional feedback information according to the reward function to obtain a reward value; matching the reward value to a corresponding emotional feedback item; and determining the emotional feedback item as the emotion response result.
Further, optionally, the reward function includes: Q = r(ai, s, t) + p*r(ai, s+1, t+1) + ... + p^(n-1)*r(ai, s, t+n), where Q is the long-term expected return; r is the current reward; ai is the action currently taken; s+n is the state at current time n; t+n is the time state n; and p is the discount rate, a real number in [0, 1].
According to another aspect of the embodiments of the present invention, an emotion expression apparatus is also provided, including: a matching module, configured to match input external-stimulus information against a preset neural network model to generate emotional feedback information corresponding to the external-stimulus information; a parsing module, configured to parse the emotional feedback information according to a preset deep reinforcement learning model to obtain an emotion response result corresponding to the emotional feedback information; and an output module, configured to output the emotion response result.
Optionally, the matching module includes: a matching unit, configured to identify the external-stimulus information through a multilayer neural network model composed of multiple single-layer neural network models, to obtain the emotional feedback information corresponding to the external-stimulus information.
Further, optionally, the external-stimulus information includes at least one of: the current environment state, the acoustic environment, the visual environment, or the motion state.
Optionally, the matching unit includes: a first data processing subunit, configured to input the external-stimulus information into a first-layer neural network model for data processing to obtain a first-layer neural feedback output; and a second data processing subunit, configured to input the first-layer neural feedback into a second-layer neural network model for data processing to obtain a second-layer neural feedback output, and to pass the second-layer neural feedback output through the remaining neural network models layer by layer to finally obtain the emotional feedback information.
Further, optionally, the neural network model includes: y = w*x + b, where w is a weight, x is a layer's neural feedback output, y is the next-layer neural feedback output corresponding to that layer's output, and b is a bias term.
Optionally, the weight is dynamically adjusted by back-propagation according to the type of each layer's neural network model.
Optionally, the parsing module includes: a data receiving unit, configured to input the emotional feedback information into the preset deep reinforcement learning model; and a parsing unit, configured to parse the emotional feedback information according to the reward function in the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information.
Further, optionally, the parsing unit includes: a parsing subunit, configured to parse the emotional feedback information according to the reward function to obtain a reward value; a matching subunit, configured to match the reward value to a corresponding emotional feedback item; and a result output subunit, configured to determine the emotional feedback item as the emotion response result.
Optionally, the reward function includes: Q = r(ai, s, t) + p*r(ai, s+1, t+1) + ... + p^(n-1)*r(ai, s, t+n), where Q is the long-term expected return; r is the current reward; ai is the action currently taken; s+n is the state at current time n; t+n is the time state n; and p is the discount rate, a real number in [0, 1].
According to another aspect of the embodiments of the present invention, a robot is also provided, including an emotion expression apparatus, where the emotion expression apparatus is the apparatus described above.
In the embodiments of the present invention, emotional feedback information corresponding to input external-stimulus information is generated by matching the stimulus against a preset neural network model; the emotional feedback information is parsed according to a preset deep reinforcement learning model to obtain a corresponding emotion response result; and the emotion response result is output. This achieves the goal of the robot learning emotions automatically, and thereby the technical effect of improving the robot's emotion learning, which solves the technical problem that, in the related art, a robot's emotion system produces its emotional feedback from manually set conditions, making emotion learning inefficient and leaving emotional personality coverage incomplete.
Description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of this application. The illustrative embodiments of the present invention and their descriptions serve to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flow diagram of an emotion expression method according to an embodiment of the present invention;
Fig. 2 is a flow diagram of another emotion expression method according to an embodiment of the present invention;
Fig. 3 is a structural diagram of an emotion expression apparatus according to an embodiment of the present invention.
Detailed description of the embodiments
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the specification, claims, and accompanying drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or are inherent to that process, method, product, or device.
Embodiment 1
According to an embodiment of the present invention, a method embodiment of emotion expression is provided. It should be noted that the steps illustrated in the flowcharts of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, the steps shown or described may, in some cases, be performed in an order different from the one given here.
Fig. 1 is a flow diagram of an emotion expression method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: match the input external-stimulus information against a preset neural network model to generate emotional feedback information corresponding to the external-stimulus information;
Step S104: parse the emotional feedback information according to a preset deep reinforcement learning model to obtain an emotion response result corresponding to the emotional feedback information;
Step S106: output the emotion response result.
Here, the emotion response result can at least include the mood that the robot, after analyzing the collected information expressed by the user, feeds back in response to that information, for example: joy, anger, sorrow, happiness, fear, calm, or disgust.
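As a minimal sketch of steps S102 to S106 (a hypothetical illustration in Python; the dnn and dqn objects and all function names are invented here, not taken from the patent):

    EMOTIONS = ["joy", "anger", "sorrow", "happiness", "fear", "calm", "disgust"]

    def match_feedback(stimulus, dnn):
        # Step S102: match the stimulus against the preset neural network model.
        return dnn.forward(stimulus)

    def parse_feedback(feedback, dqn):
        # Step S104: resolve the feedback with the deep reinforcement learning model.
        q_values = dqn.evaluate(feedback)
        best = max(range(len(q_values)), key=lambda i: q_values[i])
        return EMOTIONS[best]

    def express(stimulus, dnn, dqn):
        # Step S106: output the emotion response result.
        print("emotion response:", parse_feedback(match_feedback(stimulus, dnn), dqn))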
The emotion expression method provided by the embodiments of the present application is suitable for a robot learning emotions automatically, and avoids the incomplete emotion learning that results from manually set conditions.
In the embodiments of the present invention, emotional feedback information corresponding to input external-stimulus information is generated by matching the stimulus against a preset neural network model; the emotional feedback information is parsed according to a preset deep reinforcement learning model to obtain a corresponding emotion response result; and the emotion response result is output. This achieves the goal of the robot learning emotions automatically, and thereby the technical effect of improving the robot's emotion learning, which solves the technical problem that, in the related art, a robot's emotion system produces its emotional feedback from manually set conditions, making emotion learning inefficient and leaving emotional personality coverage incomplete.
Optionally, in step S102, matching the input external-stimulus information against the preset neural network model to generate the emotional feedback information corresponding to the external-stimulus information includes:
Step 1: identify the external-stimulus information through a multilayer neural network model composed of multiple single-layer neural network models, to obtain the emotional feedback information corresponding to the external-stimulus information.
Further, optionally, the external-stimulus information includes at least one of: the current environment state, the acoustic environment, the visual environment, or the motion state.
Optionally, in Step 1 of step S102, identifying the external-stimulus information through the multilayer neural network model composed of multiple single-layer neural network models to obtain the emotional feedback information corresponding to the external-stimulus information includes:
Step A: input the external-stimulus information into a first-layer neural network model for data processing to obtain a first-layer neural feedback output;
Step B: input the first-layer neural feedback into a second-layer neural network model for data processing to obtain a second-layer neural feedback output, and pass the second-layer neural feedback output through the remaining neural network models layer by layer to finally obtain the emotional feedback information.
Further, optionally, the neural network model includes: y = w*x + b, where w is a weight, x is a layer's neural feedback output, y is the next-layer neural feedback output corresponding to that layer's output, and b is a bias term.
Specifically, each layer's neural feedback output serves as the input of the next layer, so that the received external-stimulus information is processed layer by layer, from simple to complex, until the corresponding emotional feedback information is obtained.
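A minimal sketch of this layer-by-layer forward pass (the patent only gives the affine form y = w*x + b; the ReLU nonlinearity between layers is an assumption added here for illustration):

    import numpy as np

    def forward(x, layers):
        """x: stimulus feature vector; layers: list of (w, b) pairs.

        Each layer computes y = w*x + b, and its feedback output becomes
        the next layer's input, from simple features to complex ones.
        """
        for w, b in layers[:-1]:
            x = np.maximum(0.0, w @ x + b)  # hidden layer (assumed ReLU)
        w, b = layers[-1]
        return w @ x + b                    # final emotional feedback scores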
Here, the weight is dynamically adjusted by back-propagation according to the type of each layer's neural network model.
In the emotion expression method provided by the embodiments of the present application, the currently existing error is calculated by back-propagation so as to adjust each layer's weight w; adjusting each layer's weight improves the precision of the data processing.
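A hedged sketch of one such back-propagation step for a two-layer stack (the squared-error loss, the learning rate, and the ReLU between layers are assumptions; the patent only states that each layer's weight w is adjusted dynamically):

    import numpy as np

    def backprop_step(x, target, w1, b1, w2, b2, lr=0.01):
        # Forward pass through the two layers of the form y = w*x + b.
        h_pre = w1 @ x + b1
        h = np.maximum(0.0, h_pre)
        y = w2 @ h + b2
        err = y - target                  # the currently existing error
        # Propagate the error backwards and adjust every layer's weight w.
        dw2, db2 = np.outer(err, h), err
        dh = (w2.T @ err) * (h_pre > 0)   # push the error through the ReLU
        dw1, db1 = np.outer(dh, x), dh
        w1 -= lr * dw1; b1 -= lr * db1
        w2 -= lr * dw2; b2 -= lr * db2
        return w1, b1, w2, b2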
Optionally, in step S104, parsing the emotional feedback information according to the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information includes:
Step 1: input the emotional feedback information into the preset deep reinforcement learning model;
Step 2: parse the emotional feedback information according to the reward function in the preset deep reinforcement learning model, to obtain the emotion response result corresponding to the emotional feedback information.
Optionally, in Step 2 of step S104, parsing the emotional feedback information according to the reward function in the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information includes:
Step A: parse the emotional feedback information according to the reward function to obtain a reward value;
Step B: match the reward value to a corresponding emotional feedback item;
Step C: determine the emotional feedback item as the emotion response result.
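An illustrative sketch of steps A to C (the threshold table and the q_value helper are invented for this example; the patent does not specify how reward values map to feedback items):

    # Invented thresholds: reward-value ranges mapped to emotional feedback items.
    FEEDBACK_ITEMS = [(0.75, "happiness"), (0.5, "calm"), (0.25, "sorrow"), (0.0, "anger")]

    def emotion_response(feedback, q_value):
        reward = q_value(feedback)                # Step A: parse feedback into a reward value
        for threshold, item in FEEDBACK_ITEMS:    # Step B: match a feedback item to the value
            if reward >= threshold:
                return item                       # Step C: the item is the response result
        return FEEDBACK_ITEMS[-1][1]              # below all thresholds: fall back to last item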
Further, optionally, the reward function includes:
Q = r(ai, s, t) + p*r(ai, s+1, t+1) + ... + p^(n-1)*r(ai, s, t+n);
where Q is the long-term expected return; r is the current reward; ai is the action currently taken; s+n is the state at current time n; t+n is the time state n; and p is the discount rate, a real number in [0, 1].
Pass through above-mentioned Reward Program meter in the process of back-propagating in the method for emotion expression service provided by the embodiments of the present application Obtained return value, and then compared by the return value and practical return value, the return value after being corrected, so as to Optimize every layer of weight w, promotion obtains the precision of emotive response result.
In summary, Fig. 2 is a flow diagram of another emotion expression method according to an embodiment of the present invention. As shown in Fig. 2, a concrete implementation of the emotion expression method provided by the embodiment of the present invention is as follows:
Research on traditional mood models mostly concerns the transition modes and transition probabilities of moods, outputting a corresponding behavior set according to the mood reached after a transition.
The core of the emotion expression method provided in the embodiment of the present invention is to design a mathematical model for identifying how external signals generate the stimuli corresponding to emotional changes. Once a stimulus has been generated, a hormone model is designed to control each emotional expression of the body, so that the emotional changes caused by external stimuli of different degrees finally produce timely emotional expressions with the "personality" of a living body.
Step 1: in such a system, identifying how external signals generate the stimuli corresponding to emotional changes is done with a deep neural network: an end-to-end mathematical model is built to simulate the brain's response state. External stimuli include current environmental factors (such as weather and temperature), the language environment (for example, whether the robot is conversing with a person and understands the content of the exchange), the visual environment (for example, whether it sees something it likes), the motion state, and all other external factors that may influence the current mood. The output indicates which category the current mood is more inclined toward, where the current moods include joy, anger, sorrow, happiness, fear, calm, and disgust. The optimization of this model mainly uses back-propagation to optimize the model weights. A typical single-layer structure of a deep neural network (DNN) is given by the following mathematical formula:
y = w*x + b
where w is a weight matrix of the network layer, x is the input, and b is a bias term.
Stacking multiple such single-layer networks forms a deep neural network (DNN), in which the output of each layer is the input of the next.
Step 2: for the expression of the related emotion, this system uses the deep reinforcement learning model Deep Q-Learning to establish the hormone expression mechanism, so that under the emotional states caused by different external stimuli the robot can autonomously learn the degree of recognition, also called the return, produced after it performs some set of emotional expressions. As the robot continually makes the corresponding emotional expressions according to its environment, combined with the degree of recognition that one or several testers give to those expressions, it will gradually acquire a personality synthesized from those testers. The reward function of a typical Deep Q-Learning design is as follows:
Q = r(ai, s, t) + p*r(ai, s+1, t+1) + ... + p^(n-1)*r(ai, s, t+n)
where Q is the long-term expected return, r is the current reward, ai is the action currently taken, s+n is the state at current time n, t+n is the time state n, and p is the discount rate, a real number between 0 and 1.
These two mathematical models together constitute the core of the entire mood transition and expression model. To implement them, the two models are first loaded by the circuit board into computer memory or circuit-board memory, and the CPU or GPU performs the numerical operations on the models, so that numerical analysis is carried out on the data collected by the sensors. The analysis result is finally returned as output to the software system in floating-point form; the software system recognizes the numerical result and determines the mood transition state and the content to be expressed.
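A hedged sketch of this runtime (the sensor, actuator, and decide names are invented for illustration; the models are assumed already loaded into memory and evaluated on the CPU or GPU):

    def run(sensor, dnn, dqn, decide, actuator):
        while True:
            stimulus = sensor.read()           # data collected by the sensors
            feedback = dnn.forward(stimulus)   # numerical analysis of the stimulus
            score = dqn.evaluate(feedback)     # floating-point result for the software layer
            mood, expression = decide(score)   # mood transition state + content to express
            actuator.perform(mood, expression)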
Embodiment 2
Fig. 3 is a structural diagram of an emotion expression apparatus according to an embodiment of the present invention. As shown in Fig. 3, the apparatus includes:
a matching module 32, configured to match input external-stimulus information against a preset neural network model to generate emotional feedback information corresponding to the external-stimulus information; a parsing module 34, configured to parse the emotional feedback information according to a preset deep reinforcement learning model to obtain an emotion response result corresponding to the emotional feedback information; and an output module 36, configured to output the emotion response result.
In the embodiments of the present invention, emotional feedback information corresponding to input external-stimulus information is generated by matching the stimulus against a preset neural network model; the emotional feedback information is parsed according to a preset deep reinforcement learning model to obtain a corresponding emotion response result; and the emotion response result is output. This achieves the goal of the robot learning emotions automatically, and thereby the technical effect of improving the robot's emotion learning, which solves the technical problem that, in the related art, a robot's emotion system produces its emotional feedback from manually set conditions, making emotion learning inefficient and leaving emotional personality coverage incomplete.
Optionally, the matching module 32 includes: a matching unit, configured to identify the external-stimulus information through a multilayer neural network model composed of multiple single-layer neural network models, to obtain the emotional feedback information corresponding to the external-stimulus information.
Further, optionally, the external-stimulus information includes at least one of: the current environment state, the acoustic environment, the visual environment, or the motion state.
Optionally, the matching unit includes: a first data processing subunit, configured to input the external-stimulus information into a first-layer neural network model for data processing to obtain a first-layer neural feedback output; and a second data processing subunit, configured to input the first-layer neural feedback into a second-layer neural network model for data processing to obtain a second-layer neural feedback output, and to pass the second-layer neural feedback output through the remaining neural network models layer by layer to finally obtain the emotional feedback information.
Further, optionally, the neural network model includes: y = w*x + b, where w is a weight, x is a layer's neural feedback output, y is the next-layer neural feedback output corresponding to that layer's output, and b is a bias term.
Optionally, the weight is dynamically adjusted by back-propagation according to the type of each layer's neural network model.
Optionally, the parsing module 34 includes: a data receiving unit, configured to input the emotional feedback information into the preset deep reinforcement learning model; and a parsing unit, configured to parse the emotional feedback information according to the reward function in the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information.
Further, optionally, the parsing unit includes: a parsing subunit, configured to parse the emotional feedback information according to the reward function to obtain a reward value; a matching subunit, configured to match the reward value to a corresponding emotional feedback item; and a result output subunit, configured to determine the emotional feedback item as the emotion response result.
Optionally, the reward function includes: Q = r(ai, s, t) + p*r(ai, s+1, t+1) + ... + p^(n-1)*r(ai, s, t+n), where Q is the long-term expected return; r is the current reward; ai is the action currently taken; s+n is the state at current time n; t+n is the time state n; and p is the discount rate, a real number in [0, 1].
Embodiment 3
According to another aspect of the embodiments of the present invention, a robot is also provided, including an emotion expression apparatus, where the emotion expression apparatus is the apparatus shown in Fig. 3.
The serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units may be a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (19)

1. An emotion expression method, characterized by comprising:
    matching input external-stimulus information against a preset neural network model to generate emotional feedback information corresponding to the external-stimulus information;
    parsing the emotional feedback information according to a preset deep reinforcement learning model to obtain an emotion response result corresponding to the emotional feedback information;
    outputting the emotion response result.
2. The method according to claim 1, characterized in that matching the input external-stimulus information against the preset neural network model to generate the emotional feedback information corresponding to the external-stimulus information comprises:
    identifying the external-stimulus information through a multilayer neural network model composed of multiple single-layer neural network models, to obtain the emotional feedback information corresponding to the external-stimulus information.
3. The method according to claim 2, characterized in that the external-stimulus information comprises at least one of: a current environment state, an acoustic environment, a visual environment, or a motion state.
4. The method according to claim 2, characterized in that identifying the external-stimulus information through the multilayer neural network model composed of multiple single-layer neural network models to obtain the emotional feedback information corresponding to the external-stimulus information comprises:
    inputting the external-stimulus information into a first-layer neural network model for data processing to obtain a first-layer neural feedback output;
    inputting the first-layer neural feedback into a second-layer neural network model for data processing to obtain a second-layer neural feedback output, and passing the second-layer neural feedback output through the remaining neural network models layer by layer to finally obtain the emotional feedback information.
5. The method according to claim 4, characterized in that the neural network model comprises:
    y = w*x + b;
    where w is a weight, x is each layer's neural feedback output, y is the next-layer neural feedback output corresponding to each layer's neural feedback output, and b is a bias term.
6. The method according to claim 5, characterized in that the weight is dynamically adjusted by back-propagation according to the type of each layer's neural network model.
7. The method according to any one of claims 1 to 6, characterized in that parsing the emotional feedback information according to the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information comprises:
    inputting the emotional feedback information into the preset deep reinforcement learning model;
    parsing the emotional feedback information according to a reward function in the preset deep reinforcement learning model, to obtain the emotion response result corresponding to the emotional feedback information.
8. The method according to claim 7, characterized in that parsing the emotional feedback information according to the reward function in the preset deep reinforcement learning model to obtain the emotion response result corresponding to the emotional feedback information comprises:
    parsing the emotional feedback information according to the reward function to obtain a reward value;
    matching the reward value to a corresponding emotional feedback item;
    determining the emotional feedback item as the emotion response result.
9. The method according to claim 8, characterized in that the reward function comprises:
    Q = r(ai, s, t) + p*r(ai, s+1, t+1) + ... + p^(n-1)*r(ai, s, t+n);
    where Q is the long-term expected return; r is the current reward; ai is the action currently taken; s+n is the state at current time n; t+n is the time state n; and p is the discount rate, a real number in [0, 1].
10. An emotion expression apparatus, characterized by comprising:
    a matching module, configured to match input external-stimulus information against a preset neural network model to generate emotional feedback information corresponding to the external-stimulus information;
    a parsing module, configured to parse the emotional feedback information according to a preset deep reinforcement learning model to obtain an emotion response result corresponding to the emotional feedback information;
    an output module, configured to output the emotion response result.
11. The apparatus according to claim 10, characterized in that the matching module comprises:
    a matching unit, configured to identify the external-stimulus information through a multilayer neural network model composed of multiple single-layer neural network models, to obtain the emotional feedback information corresponding to the external-stimulus information.
12. The apparatus according to claim 11, characterized in that the external-stimulus information comprises at least one of: a current environment state, an acoustic environment, a visual environment, or a motion state.
13. The apparatus according to claim 11, characterized in that the matching unit comprises:
    a first data processing subunit, configured to input the external-stimulus information into a first-layer neural network model for data processing to obtain a first-layer neural feedback output;
    a second data processing subunit, configured to input the first-layer neural feedback into a second-layer neural network model for data processing to obtain a second-layer neural feedback output, and to pass the second-layer neural feedback output through the remaining neural network models layer by layer to finally obtain the emotional feedback information.
14. The apparatus according to claim 13, characterized in that the neural network model comprises:
    y = w*x + b;
    where w is a weight, x is each layer's neural feedback output, y is the next-layer neural feedback output corresponding to each layer's neural feedback output, and b is a bias term.
15. The apparatus according to claim 14, characterized in that the weight is dynamically adjusted by back-propagation according to the type of each layer's neural network model.
16. The apparatus according to any one of claims 10 to 15, characterized in that the parsing module comprises:
    a data receiving unit, configured to input the emotional feedback information into the preset deep reinforcement learning model;
    a parsing unit, configured to parse the emotional feedback information according to the reward function in the preset deep reinforcement learning model, to obtain the emotion response result corresponding to the emotional feedback information.
17. The apparatus according to claim 16, characterized in that the parsing unit comprises:
    a parsing subunit, configured to parse the emotional feedback information according to the reward function to obtain a reward value;
    a matching subunit, configured to match the reward value to a corresponding emotional feedback item;
    a result output subunit, configured to determine the emotional feedback item as the emotion response result.
18. The apparatus according to claim 17, characterized in that the reward function comprises:
    Q = r(ai, s, t) + p*r(ai, s+1, t+1) + ... + p^(n-1)*r(ai, s, t+n);
    where Q is the long-term expected return; r is the current reward; ai is the action currently taken; s+n is the state at current time n; t+n is the time state n; and p is the discount rate, a real number in [0, 1].
19. A robot, characterized by comprising an emotion expression apparatus, where the emotion expression apparatus comprises the apparatus according to any one of claims 10 to 18.
CN201611200796.0A (priority 2016-12-22, filed 2016-12-22): Emotion expression method and device and robot. Granted as CN108229640B; status: Active.

Priority Applications (2)

    • CN201611200796.0A (priority 2016-12-22, filed 2016-12-22): Emotion expression method and device and robot; granted as CN108229640B
    • PCT/CN2017/092037 (priority 2016-12-22, filed 2017-07-06): Emotional expression method and device, and robot; published as WO2018113260A1

Applications Claiming Priority (1)

    • CN201611200796.0A (priority 2016-12-22, filed 2016-12-22): Emotion expression method and device and robot; granted as CN108229640B

Publications (2)

    • CN108229640A, published 2018-06-29
    • CN108229640B, granted 2021-08-20

Family

ID=62624719

Family Applications (1)

    • CN201611200796.0A (priority 2016-12-22, filed 2016-12-22): Emotion expression method and device and robot; granted as CN108229640B (Active)

Country Status (2)

    • CN: CN108229640B
    • WO: WO2018113260A1

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109240488A (en) * 2018-07-27 2019-01-18 重庆柚瓣家科技有限公司 A kind of implementation method of AI scene engine of positioning
CN112788990A (en) * 2018-09-28 2021-05-11 三星电子株式会社 Electronic device and method for obtaining emotion information

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210012730A (en) 2019-07-26 2021-02-03 삼성전자주식회사 Learning method of artificial intelligence model and electronic apparatus
US20220284649A1 (en) * 2021-03-06 2022-09-08 Artificial Intelligence Foundation, Inc. Virtual Representation with Dynamic and Realistic Behavioral and Emotional Responses

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1297393A (en) * 1999-01-20 2001-05-30 索尼公司 Robot device and motion control method
US20040036437A1 (en) * 2001-04-03 2004-02-26 Masato Ito Legged mobile robot and its motion teaching method, and storage medium
US20040243281A1 (en) * 2002-03-15 2004-12-02 Masahiro Fujita Robot behavior control system, behavior control method, and robot device
JP2005199403A (en) * 2004-01-16 2005-07-28 Sony Corp Emotion recognition device and method, emotion recognition method of robot device, learning method of robot device and robot device
CN201814558U (en) * 2010-05-18 2011-05-04 上海理工大学 Real-time emotion detection system for information science and man-machine interactive system
CN102298694A (en) * 2011-06-21 2011-12-28 广东爱科数字科技有限公司 Man-machine interaction identification system applied to remote information service
CN102402712A (en) * 2011-08-31 2012-04-04 山东大学 Robot reinforced learning initialization method based on neural network
CN103729459A (en) * 2014-01-10 2014-04-16 北京邮电大学 Method for establishing sentiment classification model
US8909370B2 (en) * 2007-05-08 2014-12-09 Massachusetts Institute Of Technology Interactive systems employing robotic companions
CN105511608A (en) * 2015-11-30 2016-04-20 北京光年无限科技有限公司 Intelligent robot based interaction method and device, and intelligent robot
CN105913039A (en) * 2016-04-26 2016-08-31 北京光年无限科技有限公司 Visual-and-vocal sense based dialogue data interactive processing method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002370183A (en) * 2001-06-15 2002-12-24 Yamaha Motor Co Ltd Monitor and monitoring system
CN102063640B (en) * 2010-11-29 2013-01-30 北京航空航天大学 Robot behavior learning model based on utility differential network
CN102438064A (en) * 2011-09-28 2012-05-02 宇龙计算机通信科技(深圳)有限公司 Emotion expression method and system of mobile terminal and mobile terminal
CN103218654A (en) * 2012-01-20 2013-07-24 沈阳新松机器人自动化股份有限公司 Robot emotion generating and expressing system
CN106096717B (en) * 2016-06-03 2018-08-14 北京光年无限科技有限公司 Information processing method towards intelligent robot and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Yuyou, "Research on Emotional Interaction in Human-Computer Interaction Based on the Cognitive Appraisal Theory of Emotion", China Masters' Theses Full-text Database, Information Science and Technology Series *

Also Published As

    • CN108229640B, granted 2021-08-20
    • WO2018113260A1, published 2018-06-28

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
    Effective date of registration: 2021-07-30
    Address after: Room 832, 8/F, Guoyao Building, No. 2 Longsheng Street, Tanghuai Park, Taiyuan City
    Applicant after: Shanxi YiTianXia Intelligent Technology Co.,Ltd.
    Address before: 15D-02F, Sunshine Huayi Building 1, north of Xiguangxi Temple Road, Nanhai Road, Nanshan District, Shenzhen, Guangdong 518000
    Applicant before: SHEN ZHEN KUANG-CHI HEZHONG TECHNOLOGY Ltd.
GR01: Patent grant