CN108614987A - Data processing method, apparatus and robot - Google Patents

Data processing method, apparatus and robot

Info

Publication number
CN108614987A
Authority
CN
China
Prior art keywords
information
emotional
detected
acoustic
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611149247.5A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kuang Chi Hezhong Technology Ltd
Shenzhen Guangqi Hezhong Technology Co Ltd
Original Assignee
Shenzhen Guangqi Hezhong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guangqi Hezhong Technology Co Ltd
Priority to CN201611149247.5A priority Critical patent/CN108614987A/en
Priority to PCT/CN2017/092036 priority patent/WO2018107731A1/en
Publication of CN108614987A publication Critical patent/CN108614987A/en
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a data processing method, apparatus and robot. The method includes: acquiring environment information about an object to be detected, where the environment information includes image information and sound information; comparing the image information with image information corresponding to emotion information in a preset database, and comparing the sound information with sound information corresponding to the emotion information in the preset database; screening out the emotion information matching the object to be detected, where the emotion information instructs the robot to perform a corresponding operation; and adjusting the interaction mode according to the emotion information. The invention solves the technical problem that, in the related art, the lack of detection, judgment and corresponding processing of micro-expressions leads to low accuracy of stated information during human-computer interaction or interpersonal communication.

Description

Data processing method, apparatus and robot
Technical field
The present invention relates to the field of electronic technology applications, and in particular to a data processing method, apparatus and robot.
Background art
Judging mood from micro-expressions and micro-language has only recently been proposed (for example, the micro-expression recognition systems developed by Emotient Inc., Affectiva Inc. and Eyeris). Combining the two to judge mood has not been realized, nor has inferring the other party's specific thoughts on the basis of the judged mood.
In real life, people often need to deceive in order to achieve their goals, which requires them to suppress and conceal their true emotions and feelings. These suppressed and hidden true feelings are nonetheless sometimes expressed in the form of very brief facial expressions, known as micro-expressions. Because they reveal a person's true intent, micro-expressions have become powerful behavioral clues for detecting lies and dangerous intentions. However, micro-expressions appear and vanish very quickly, and people find it difficult to recognize them accurately. Existing research shows that, in the recognition of ordinary expressions, facial feedback information is an effective internal cue for expression recognition. For micro-expressions, a special form of facial expression, researchers do not yet understand the role of facial feedback during recognition. Answering this question will deepen researchers' understanding of the micro-expression recognition process and provide a foundation for applying micro-expressions in fields such as clinical practice, justice and counter-terrorism.
For the above problem that, in the related art, the lack of detection, judgment and corresponding processing of micro-expressions leads to low accuracy of stated information during human-computer interaction or interpersonal communication, no effective solution has yet been proposed.
Summary of the invention
Embodiments of the present invention provide a data processing method, apparatus and robot, to at least solve the technical problem that, in the related art, the lack of detection, judgment and corresponding processing of micro-expressions leads to low accuracy of stated information during human-computer interaction or interpersonal communication.
According to one aspect of an embodiment of the present invention, a data processing method is provided, including: acquiring environment information about an object to be detected, where the environment information includes image information and sound information; comparing the image information with image information corresponding to emotion information in a preset database, and comparing the sound information with sound information corresponding to the emotion information in the preset database; screening out the emotion information matching the object to be detected, where the emotion information instructs a robot to perform a corresponding operation; and adjusting the interaction mode according to the emotion information.
Optionally, acquiring the environment information about the object to be detected includes: acquiring, by an image recognition system, at least one of the body movements, facial expression movements and thermal imaging data of the object to be detected; and acquiring, by a sound recognition system, the voice of the object to be detected and/or the background sound of the environment in which the object to be detected is located, where the sound includes pitch, loudness, frequency and the information carried by the sound.
Further, optionally, the image recognition system includes a micro-expression recognition system, and the sound recognition system includes a micro-language recognition system.
Optionally, comparing the image information with the image information corresponding to the emotion information in the preset database, and comparing the sound information with the sound information corresponding to the emotion information in the preset database, includes: matching, in the preset database, the image information and sound information corresponding to the acquired image information and sound information, to obtain at least one group of image information and sound information.
Further, optionally, screening out the emotion information matching the object to be detected includes: calculating the similarity between the at least one group of image information and sound information and the acquired image information and sound information; and extracting the emotion information corresponding to the at least one group of image information and sound information whose similarity exceeds a preset threshold, to obtain the emotion information matching the object to be detected.
Further, optionally, adjusting the interaction mode according to the emotion information includes: processing, by a preset mood model, the emotion information corresponding to the at least one group of image information and sound information, to obtain the interaction mode corresponding to the emotion information; outputting the interaction mode, and acquiring the environment information in which the object to be detected is currently located; and correcting the interaction mode according to the current environment information until an interaction result corresponding to the object to be detected is obtained. Here, processing the emotion information by the preset mood model includes: judging or predicting the emotion information by the preset mood model to obtain the interaction mode corresponding to the emotion information.
According to another aspect of an embodiment of the present invention, a data processing apparatus is provided, including: an acquisition module, configured to acquire environment information about an object to be detected, where the environment information includes image information and sound information; a matching module, configured to compare the image information with image information corresponding to emotion information in a preset database, and to compare the sound information with sound information corresponding to the emotion information in the preset database; a screening module, configured to screen out the emotion information matching the object to be detected; and a correction module, configured to adjust the interaction mode according to the emotion information.
Optionally, the acquisition module includes: a first acquisition unit, configured to acquire, by an image recognition system, at least one of the body movements, facial expression movements and thermal imaging data of the object to be detected; and a second acquisition unit, configured to acquire, by a sound recognition system, the voice of the object to be detected and/or the background sound of the environment in which the object to be detected is located, where the sound includes pitch, loudness, frequency and the information carried by the sound.
Further, optionally, the image recognition system includes a micro-expression recognition system, and the sound recognition system includes a micro-language recognition system.
Optionally, the matching module includes: a matching unit, configured to match, in the preset database, the image information and sound information corresponding to the acquired image information and sound information, to obtain at least one group of image information and sound information.
Further, optionally, the screening module includes: a computing unit, configured to calculate the similarity between the at least one group of image information and sound information and the acquired image information and sound information; and an extraction unit, configured to extract the emotion information corresponding to the at least one group of image information and sound information whose similarity exceeds a preset threshold, to obtain the emotion information matching the object to be detected.
Further, optionally, the correction module includes: an adjustment unit, configured to process, by a preset mood model, the emotion information corresponding to the at least one group of image information and sound information, to obtain the interaction mode corresponding to the emotion information; an acquisition unit, configured to output the interaction mode and acquire the environment information in which the object to be detected is currently located; and a correction unit, configured to correct the interaction mode according to the current environment information until an interaction result corresponding to the object to be detected is obtained. Here, processing the emotion information by the preset mood model includes: judging or predicting the emotion information by the preset mood model to obtain the interaction mode corresponding to the emotion information.
According to yet another aspect of an embodiment of the present invention, a robot is provided, including the above data processing apparatus.
In the embodiments of the present invention, the environment information about the object to be detected is acquired, where the environment information includes image information and sound information; the image information is compared with image information corresponding to emotion information in a preset database, and the sound information is compared with sound information corresponding to the emotion information in the preset database; the emotion information matching the object to be detected is screened out, where the emotion information instructs the robot to perform a corresponding operation; and the interaction mode is adjusted according to the emotion information. This achieves the purpose of accurately predicting micro-expressions and realizes the technical effect of accurately detecting them, thereby solving the technical problem that, in the related art, the lack of detection, judgment and corresponding processing of micro-expressions leads to low accuracy of stated information during human-computer interaction or interpersonal communication.
Brief description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their description serve to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a data processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Apparently, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second" and the like in the description, claims and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that contains a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
Embodiment 1
According to an embodiment of the present invention, a method embodiment of data processing is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be executed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, the steps shown or described may in some cases be executed in an order different from the one described here.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: acquire environment information about an object to be detected, where the environment information includes image information and sound information;
Step S104: compare the image information with image information corresponding to emotion information in a preset database, and compare the sound information with sound information corresponding to the emotion information in the preset database;
Step S106: screen out the emotion information matching the object to be detected, where the emotion information instructs the robot to perform a corresponding operation;
Step S108: adjust the interaction mode according to the emotion information.
Through the above steps, the environment information about the object to be detected is acquired, where the environment information includes image information and sound information; the image information is compared with image information corresponding to emotion information in a preset database, and the sound information is compared with sound information corresponding to the emotion information in the preset database; the emotion information matching the object to be detected is screened out, where the emotion information instructs the robot to perform a corresponding operation; and the interaction mode is adjusted according to the emotion information. This achieves accurate prediction and detection of micro-expressions, thereby solving the technical problem that, in the related art, the lack of detection, judgment and corresponding processing of micro-expressions leads to low accuracy of stated information during human-computer interaction or interpersonal communication.
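To make the flow of steps S102 to S108 concrete, the following minimal Python sketch strings the four steps together. All function names used here (acquire_environment, match_database, screen_emotion, adjust_interaction) are hypothetical placeholders introduced for illustration, not names from the patent; each placeholder is sketched under the corresponding step below.

```python
def process_once(database, mood_model):
    """One pass through steps S102 to S108 (hypothetical sketch)."""
    # Step S102: acquire image and sound information about the subject
    image_info, sound_info = acquire_environment()
    # Step S104: compare against the image/sound entries tied to
    # emotion information in the preset database
    candidates = match_database(image_info, sound_info, database)
    # Step S106: screen out the emotion information matching the subject
    emotion = screen_emotion(candidates, image_info, sound_info)
    # Step S108: map the emotion information to an interaction mode
    return emotion, adjust_interaction(emotion, mood_model)
```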
Optionally, acquiring the environment information about the object to be detected in step S102 includes:
Step 1: acquire, by an image recognition system, at least one of the body movements, facial expression movements and thermal imaging data of the object to be detected;
Step 2: acquire, by a sound recognition system, the voice of the object to be detected and/or the background sound of the environment in which the object to be detected is located, where the sound includes pitch, loudness, frequency and the information carried by the sound.
Further, optionally, the image recognition system includes a micro-expression recognition system, and the sound recognition system includes a micro-language recognition system. A sketch of this acquisition step follows.
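A minimal sketch of the acquisition step, assuming OpenCV (cv2) for the camera and the sounddevice package for the microphone; the crude colour statistics and spectrum at the end merely stand in for the micro-expression and micro-language feature extractors, which the patent does not specify.

```python
import cv2                # camera capture (OpenCV)
import numpy as np
import sounddevice as sd  # microphone capture

def acquire_environment(camera_index=0, seconds=1.0, sample_rate=16000):
    """Step S102: grab one video frame and a short audio clip."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()            # facial expression / body image
    cap.release()
    if not ok:
        raise RuntimeError("could not read a camera frame")
    audio = sd.rec(int(seconds * sample_rate),
                   samplerate=sample_rate, channels=1)
    sd.wait()                         # voice plus background sound
    # Placeholder features; a real system would run the micro-expression
    # and micro-language recognition systems here instead.
    image_info = frame.astype(np.float32).mean(axis=(0, 1))  # mean colour
    sound_info = np.abs(np.fft.rfft(audio[:, 0]))[:128]      # raw spectrum
    return image_info, sound_info
```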
Optionally, comparing the image information with the image information corresponding to the emotion information in the preset database, and comparing the sound information with the sound information corresponding to the emotion information in the preset database, in step S104 includes:
Step 1: match, in the preset database, the image information and sound information corresponding to the acquired image information and sound information, to obtain at least one group of image information and sound information.
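The preset database can be pictured as a flat list of entries, each pairing stored image information and sound information with an emotion label. The layout below is an assumption made for illustration; the patent only requires that matching yield at least one group of image information and sound information.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EmotionEntry:
    image_vec: np.ndarray  # image information stored for this emotion
    sound_vec: np.ndarray  # sound information stored for this emotion
    emotion: str           # emotion information, e.g. "suppressed anger"

def match_database(image_info, sound_info, database):
    """Step S104: return the groups of stored image/sound information
    that are directly comparable to the acquired information."""
    return [entry for entry in database
            if entry.image_vec.shape == image_info.shape
            and entry.sound_vec.shape == sound_info.shape]
```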
Further, optionally, screening out the emotion information matching the object to be detected in step S106 includes:
Step 1: calculate the similarity between the at least one group of image information and sound information and the acquired image information and sound information;
Step 2: extract the emotion information corresponding to the at least one group of image information and sound information whose similarity exceeds a preset threshold, to obtain the emotion information matching the object to be detected.
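Continuing the sketch above, the screening step can be illustrated as follows. Cosine similarity and a threshold of 0.8 are illustrative choices; the patent requires only some similarity measure and a preset threshold.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def screen_emotion(candidates, image_info, sound_info, threshold=0.8):
    """Step S106: keep the emotion information whose stored image and
    sound vectors both resemble the acquired ones, then take the best."""
    scored = [(min(cosine(e.image_vec, image_info),
                   cosine(e.sound_vec, sound_info)), e.emotion)
              for e in candidates]
    above = [(score, emotion) for score, emotion in scored
             if score > threshold]
    if not above:
        return None          # no group exceeded the preset threshold
    return max(above)[1]     # emotion label with the highest similarity
```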
Further, optionally, adjusting the interaction mode according to the emotion information in step S108 includes:
Step 1: process, by a preset mood model, the emotion information corresponding to the at least one group of image information and sound information, to obtain the interaction mode corresponding to the emotion information;
Step 2: output the interaction mode, and acquire the environment information in which the object to be detected is currently located;
Step 3: correct the interaction mode according to the current environment information until an interaction result corresponding to the object to be detected is obtained.
Here, processing the emotion information corresponding to the at least one group of image information and sound information by the preset mood model includes: judging or predicting the emotion information by the preset mood model to obtain the interaction mode corresponding to the emotion information.
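The patent leaves the preset mood model abstract. As a stand-in for "judging or predicting" the emotion information, the sketch below uses a plain lookup table mapping emotion labels to interaction modes; both the table contents and the fallback mode are hypothetical, and any real system would substitute a trained model here.

```python
# Hypothetical mood model: emotion information -> interaction mode.
DEFAULT_MOOD_MODEL = {
    "suppressed anger": "soften tone and slow the pace",
    "nervous concealment": "ask gentle, open-ended questions",
    "calm": "continue the current topic",
}

def adjust_interaction(emotion, mood_model=DEFAULT_MOOD_MODEL):
    """Step S108: obtain the interaction mode for the emotion
    information; fall back to a neutral mode when it is unknown."""
    return mood_model.get(emotion, "neutral small talk")
```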
In summary, this scheme proposes an approach, based on image recognition and speech recognition technology, for judging the micro-expressions and micro-language of the interlocutor. It combines micro-expressions and micro-language for the first time to infer the interlocutor's thoughts, and determines the interlocutor's true thoughts by continuously adjusting the manner of communication.
Fig. 2 is a flowchart of a data processing method according to an embodiment of the present invention. As shown in Fig. 2, the system includes an image recognition (micro-expression) system and a voice recognition (micro-language) system. Its overall execution is as follows:
1. When communicating with the user, the system first acquires the user's image and voice information, extracts micro-expression-related information from the image, and extracts micro-language-related information from the voice.
2. From the micro-expression and micro-language information obtained, judge the interlocutor's mood and infer their thoughts.
3. According to the judged mood and the inferred thoughts of the interlocutor, adjust the manner of communication.
4. Repeat the preceding three steps until the interlocutor's true thoughts are determined, as sketched below.
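Putting the pieces together, the loop of Fig. 2 can be sketched as follows, reusing the placeholder functions above. The stopping rule, two consecutive rounds yielding the same judged emotion, is an assumption standing in for the patent's informal "until the true thoughts are determined".

```python
def converse(database, mood_model=DEFAULT_MOOD_MODEL, max_rounds=10):
    """Fig. 2: acquire, judge, adjust, and repeat (steps 1 to 4)."""
    previous = None
    for _ in range(max_rounds):
        emotion, mode = process_once(database, mood_model)
        print(f"judged emotion: {emotion!r}; interaction mode: {mode!r}")
        if emotion is not None and emotion == previous:
            return emotion   # stable judgement: treat as the true state
        previous = emotion
    return previous          # best available judgement after max_rounds
```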
Embodiment 2
Fig. 3 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention. As shown in Fig. 3, the apparatus includes:
an acquisition module 32, configured to acquire environment information about an object to be detected, where the environment information includes image information and sound information; a matching module 34, configured to compare the image information with image information corresponding to emotion information in a preset database, and to compare the sound information with sound information corresponding to the emotion information in the preset database; a screening module 36, configured to screen out the emotion information matching the object to be detected; and a correction module 38, configured to adjust the interaction mode according to the emotion information.
Through the above modules, the image information and sound information of the object to be detected can be acquired; the image information is compared with image information corresponding to emotion information in the preset database, and the sound information is compared with sound information corresponding to the emotion information in the preset database; the emotion information matching the object to be detected is screened out; and the interaction mode is adjusted according to the emotion information. This achieves accurate prediction and detection of micro-expressions, thereby solving the technical problem that, in the related art, the lack of detection, judgment and corresponding processing of micro-expressions leads to low accuracy of stated information during human-computer interaction or interpersonal communication.
Optionally, the acquisition module 32 includes: a first acquisition unit, configured to acquire, by an image recognition system, at least one of the body movements, facial expression movements and thermal imaging data of the object to be detected; and a second acquisition unit, configured to acquire, by a sound recognition system, the voice of the object to be detected and/or the background sound of the environment in which the object to be detected is located, where the sound includes pitch, loudness, frequency and the information carried by the sound.
Further, optionally, the image recognition system includes a micro-expression recognition system, and the sound recognition system includes a micro-language recognition system.
Optionally, the matching module 34 includes: a matching unit, configured to match, in the preset database, the image information and sound information corresponding to the acquired image information and sound information, to obtain at least one group of image information and sound information.
Further, optionally, the screening module 36 includes: a computing unit, configured to calculate the similarity between the at least one group of image information and sound information and the acquired image information and sound information; and an extraction unit, configured to extract the emotion information corresponding to the at least one group of image information and sound information whose similarity exceeds a preset threshold, to obtain the emotion information matching the object to be detected.
Further, optionally, the correction module 38 includes: an adjustment unit, configured to process, by a preset mood model, the emotion information corresponding to the at least one group of image information and sound information, to obtain the interaction mode corresponding to the emotion information; an acquisition unit, configured to output the interaction mode and acquire the environment information in which the object to be detected is currently located; and a correction unit, configured to correct the interaction mode according to the current environment information until an interaction result corresponding to the object to be detected is obtained. Here, processing the emotion information by the preset mood model includes: judging or predicting the emotion information by the preset mood model to obtain the interaction mode corresponding to the emotion information.
Embodiment 3
According to yet another aspect of an embodiment of the present invention, a robot is provided, including the data processing apparatus shown in Fig. 3.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in a given embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of units may be a division of logical functions, and other division manners are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units or modules, and may be electrical or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
The above are only preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (13)

1. A data processing method, characterized by comprising:
acquiring environment information about an object to be detected, wherein the environment information includes image information and sound information;
comparing the image information with image information corresponding to emotion information in a preset database, and comparing the sound information with sound information corresponding to the emotion information in the preset database;
screening out the emotion information matching the object to be detected, wherein the emotion information is used to instruct a robot to perform a corresponding operation; and
adjusting an interaction mode according to the emotion information.
2. The method according to claim 1, characterized in that acquiring the environment information about the object to be detected comprises:
acquiring, by an image recognition system, at least one of body movements, facial expression movements and thermal imaging data of the object to be detected; and
acquiring, by a sound recognition system, the voice of the object to be detected and/or the background sound of the environment in which the object to be detected is located, wherein the sound includes pitch, loudness, frequency and the information carried by the sound.
3. The method according to claim 2, characterized in that the image recognition system comprises a micro-expression recognition system, and the sound recognition system comprises a micro-language recognition system.
4. The method according to claim 1, characterized in that comparing the image information with the image information corresponding to the emotion information in the preset database, and comparing the sound information with the sound information corresponding to the emotion information in the preset database, comprises:
matching, in the preset database, the image information and sound information corresponding to the acquired image information and sound information, to obtain at least one group of image information and sound information.
5. The method according to claim 4, characterized in that screening out the emotion information matching the object to be detected comprises:
calculating the similarity between the at least one group of image information and sound information and the acquired image information and sound information; and
extracting the emotion information corresponding to the at least one group of image information and sound information whose similarity exceeds a preset threshold, to obtain the emotion information matching the object to be detected.
6. The method according to claim 5, characterized in that adjusting the interaction mode according to the emotion information comprises:
processing, by a preset mood model, the emotion information corresponding to the at least one group of image information and sound information, to obtain the interaction mode corresponding to the emotion information;
outputting the interaction mode, and acquiring the environment information in which the object to be detected is currently located; and
correcting the interaction mode according to the current environment information until an interaction result corresponding to the object to be detected is obtained;
wherein processing, by the preset mood model, the emotion information corresponding to the at least one group of image information and sound information comprises: judging or predicting the emotion information by the preset mood model to obtain the interaction mode corresponding to the emotion information.
7. A data processing apparatus, characterized by comprising:
an acquisition module, configured to acquire environment information about an object to be detected, wherein the environment information includes image information and sound information;
a matching module, configured to compare the image information with image information corresponding to emotion information in a preset database, and to compare the sound information with sound information corresponding to the emotion information in the preset database;
a screening module, configured to screen out the emotion information matching the object to be detected; and
a correction module, configured to adjust an interaction mode according to the emotion information.
8. The apparatus according to claim 7, characterized in that the acquisition module comprises:
a first acquisition unit, configured to acquire, by an image recognition system, at least one of body movements, facial expression movements and thermal imaging data of the object to be detected; and
a second acquisition unit, configured to acquire, by a sound recognition system, the voice of the object to be detected and/or the background sound of the environment in which the object to be detected is located, wherein the sound includes pitch, loudness, frequency and the information carried by the sound.
9. The apparatus according to claim 8, characterized in that the image recognition system comprises a micro-expression recognition system, and the sound recognition system comprises a micro-language recognition system.
10. The apparatus according to claim 7, characterized in that the matching module comprises:
a matching unit, configured to match, in the preset database, the image information and sound information corresponding to the acquired image information and sound information, to obtain at least one group of image information and sound information.
11. The apparatus according to claim 10, characterized in that the screening module comprises:
a computing unit, configured to calculate the similarity between the at least one group of image information and sound information and the acquired image information and sound information; and
an extraction unit, configured to extract the emotion information corresponding to the at least one group of image information and sound information whose similarity exceeds a preset threshold, to obtain the emotion information matching the object to be detected.
12. The apparatus according to claim 11, characterized in that the correction module comprises:
an adjustment unit, configured to process, by a preset mood model, the emotion information corresponding to the at least one group of image information and sound information, to obtain the interaction mode corresponding to the emotion information;
an acquisition unit, configured to output the interaction mode and acquire the environment information in which the object to be detected is currently located; and
a correction unit, configured to correct the interaction mode according to the current environment information until an interaction result corresponding to the object to be detected is obtained;
wherein processing, by the preset mood model, the emotion information corresponding to the at least one group of image information and sound information comprises: judging or predicting the emotion information by the preset mood model to obtain the interaction mode corresponding to the emotion information.
13. A robot, characterized by comprising the data processing apparatus according to any one of claims 7 to 12.
CN201611149247.5A 2016-12-13 2016-12-13 The method, apparatus and robot of data processing Pending CN108614987A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611149247.5A CN108614987A (en) 2016-12-13 2016-12-13 The method, apparatus and robot of data processing
PCT/CN2017/092036 WO2018107731A1 (en) 2016-12-13 2017-07-06 Data processing method and device, and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611149247.5A CN108614987A (en) 2016-12-13 2016-12-13 The method, apparatus and robot of data processing

Publications (1)

Publication Number Publication Date
CN108614987A true CN108614987A (en) 2018-10-02

Family

ID=62559409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611149247.5A Pending CN108614987A (en) 2016-12-13 2016-12-13 The method, apparatus and robot of data processing

Country Status (2)

Country Link
CN (1) CN108614987A (en)
WO (1) WO2018107731A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110267052B (en) * 2019-06-19 2021-04-16 云南大学 Intelligent barrage robot based on real-time emotion feedback

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570496B (en) * 2016-11-22 2019-10-01 上海智臻智能网络科技股份有限公司 Emotion identification method and apparatus and intelligent interactive method and equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103617A (en) * 2009-12-22 2011-06-22 华为终端有限公司 Method and device for acquiring expression meanings
CN102355527A (en) * 2011-07-22 2012-02-15 深圳市无线开锋科技有限公司 Mood induction apparatus of mobile phone and method thereof
US20130337420A1 (en) * 2012-06-19 2013-12-19 International Business Machines Corporation Recognition and Feedback of Facial and Vocal Emotions
CN104504112A (en) * 2014-12-30 2015-04-08 何业文 Cinema information acquisition system
CN105046238A (en) * 2015-08-17 2015-11-11 华侨大学 Facial expression robot multi-channel information emotion expression mapping method
CN105082150A (en) * 2015-08-25 2015-11-25 国家康复辅具研究中心 Robot man-machine interaction method based on user mood and intension recognition
CN105279494A (en) * 2015-10-23 2016-01-27 上海斐讯数据通信技术有限公司 Human-computer interaction system, method and equipment capable of regulating user emotion
CN105559804A (en) * 2015-12-23 2016-05-11 上海矽昌通信技术有限公司 Mood manager system based on multiple monitoring
CN105843118A (en) * 2016-03-25 2016-08-10 北京光年无限科技有限公司 Robot interacting method and robot system
CN105868827A (en) * 2016-03-25 2016-08-17 北京光年无限科技有限公司 Multi-mode interaction method for intelligent robot, and intelligent robot
CN106203344A (en) * 2016-07-12 2016-12-07 北京光年无限科技有限公司 A kind of Emotion identification method and system for intelligent robot

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284727A (en) * 2018-10-08 2019-01-29 上海思依暄机器人科技股份有限公司 A kind of method and device of robot environment-identification
CN113661036A (en) * 2019-04-16 2021-11-16 索尼集团公司 Information processing apparatus, information processing method, and program
CN111176503A (en) * 2019-12-16 2020-05-19 珠海格力电器股份有限公司 Interactive system setting method and device and storage medium
CN111353034A (en) * 2020-02-28 2020-06-30 重庆百事得大牛机器人有限公司 Legal fact correction system and method based on gesture collection
CN113435338A (en) * 2021-06-28 2021-09-24 平安科技(深圳)有限公司 Voting classification method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2018107731A1 (en) 2018-06-21

Similar Documents

Publication Publication Date Title
CN108614987A (en) Data processing method, apparatus and robot
CN108235770B (en) Image identification method and cloud system
CN107633207A (en) AU feature recognition method, device and storage medium
CN107609493B (en) Method and device for optimizing human face image quality evaluation model
CN102890776B (en) Method for retrieving emoticon explanations through facial expressions
CN107679447A (en) Facial feature point detection method, device and storage medium
CN107633204A (en) Face occlusion detection method, apparatus and storage medium
CN104409080B (en) Voice endpoint detection method and device
CN107862292A (en) Person emotion analysis method, device and storage medium
CN104951807B (en) Method and apparatus for determining stock market sentiment
KR20170091318A (en) Authentication apparatus and method based on electrocardiographic signals
CN110741387B (en) Face recognition method and device, storage medium and electronic equipment
CN102890777B (en) Computer system capable of recognizing facial expressions
CN105095415A (en) Method and apparatus for confirming network emotion
CN109192225A (en) Method and device for speech emotion recognition and annotation
CN110705584A (en) Emotion recognition method, emotion recognition device, computer device and storage medium
CN110287918A (en) Living-body detection method and related product
CN107153811A (en) Recognition method, apparatus and system for multi-modal biometric features
CN107067022B (en) Method, device and equipment for establishing image classification model
CN109785123A (en) Business handling assistance method, device and terminal device
CN108509034A (en) Electronic device, information processing method and related product
CN104966109B (en) Medical test sheet image classification method and device
Abidin et al. Enhanced LBP texture features from time frequency representations for acoustic scene classification
CN108564067A (en) Threshold determination method and system for face comparison
Dang et al. Dynamic multi-rater gaussian mixture regression incorporating temporal dependencies of emotion uncertainty using kalman filters

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181002)