CN205750354U - A kind of expression robot - Google Patents

A kind of expression robot

Info

Publication number
CN205750354U
Authority
CN
China
Prior art keywords: expression, human face, face expression, unit, control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201620426948.8U
Other languages
Chinese (zh)
Inventor
郑必义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Kangdrui Supply Chain Management Co Ltd
Original Assignee
Shenzhen Jinle Intelligent Health Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinle Intelligent Health Technology Co Ltd
Priority to CN201620426948.8U
Application granted
Publication of CN205750354U

Abstract

This utility model provides an expression robot. Its expression acquisition unit collects facial-expression control information and sends it to a host computer; the host computer forwards the received facial-expression control information to a processing unit; the processing unit matches the received control information against a sample expression database so as to identify the facial expression that matches it, and sends the recognition result to a control unit; the control unit receives the recognition result, generates a control signal instructing the action unit to perform the facial-expression action matching the control information, and sends the signal to the motion unit; the motion unit, on receiving the control signal, performs the matching facial-expression action. This utility model can recognize the user's facial-expression actions quickly and accurately, making the interaction process more vivid and engaging.

Description

A kind of expression robot
Technical field
This utility model relates to the field of robotics, and in particular to an expression robot.
Background technology
An expression robot is an intelligent robot capable of simulating human facial expressions and emotional actions. As a service robot, it plays an important role in realizing human-machine interaction, particularly affective interaction. Owing to the humanized, emotive character of expression robots, their study has very broad application prospects; in recent years, a growing number of research institutions have organized research on robots with humanoid facial expressions.
Unlike traditional industrial robots, expression robots place higher demands on interactivity, intelligence, and autonomy. Their study involves knowledge from many fields, including mechanical design, automatic control, computational intelligence, psychology, and cognitive science, and thus has a typical multidisciplinary character; how to integrate knowledge from these fields is a key issue in emotional expression. In addition, one research challenge for expression robots is completing the mechanical structure design within a confined space: the overall design features a narrow space, a small range of motion, precise movements, and small loads, while the actions of the whole machine must be coordinated and conform to the motion rules of the human head and face. Motion transmission must be accurate and free of distortion, which is also a problem this work focuses on solving.
At present, some European, American, and Japanese universities and research institutions have achieved certain research results; the robots they have developed emphasize mechanical stability and have complex head structures with lifelike facial skin and texture.
Although the existing expression robots described above have largely achieved human-like form, their internal mechanical structures are complex, their mass is excessive, they are expensive, and their appearance is unattractive. Most of their driving devices use DC motors or servos, and the control mechanisms for different locations are interwoven within the narrow head structure, making motion transmission uncoordinated and ultimately affecting the robot's overall appearance and head-movement effect. Moreover, most research in the field still centers on the structural design of the head: interactivity is poor, human-machine interaction is considered only in limited respects, and the robots lack the capacity for multichannel affective interaction and control. The application of intelligent technologies such as visual expression analysis and speech recognition to expression robots remains insufficient, so robots cannot quickly and effectively recognize and reproduce a user's facial-expression actions, and real-time interactivity is poor.
In view of this, it is necessary to provide an expression robot interactive system capable of real-time interaction with a human.
Utility model content
The purpose of this utility model is to overcome the above deficiencies of the prior art by providing an expression robot capable of real-time interaction with a human, with the aim of solving the technical problem that existing expression robots, owing to their poor interactivity, cannot quickly and effectively recognize and reproduce a user's facial-expression actions.
This utility model is achieved as follows: an expression robot comprising a fixed mount and an expression robot body arranged on the fixed mount. A plurality of stepper motors, and a plurality of gear-and-rack transmission groups each connected to a respective stepper motor, are mounted on the fixed mount. The expression robot body includes a shell system and, arranged on the shell system, two eyebrow rotating parts, two eyelid rotating parts, two eyeball rotating parts, and a mandible moving part. The expression robot body is provided with a plurality of driving steel wires, each connected to the gear-and-rack transmission mechanism, for controlling the motion of the eyebrow rotating parts, eyelid rotating parts, eyeball rotating parts, and mandible moving part. The expression robot further includes an expression acquisition unit, a host computer connected to the expression acquisition unit, a processing unit connected to the host computer, a sample expression database and a control unit each connected to the processing unit, and an action module unit (hereafter the motion unit) connected to the control unit; wherein,
The expression acquisition unit is used to collect facial-expression control information and send it to the host computer, wherein the facial-expression control information includes a facial expression image, an operation instruction for controlling a facial expression input by the user, or voice information for controlling a facial expression input by the user;
The host computer is used to receive the facial-expression control information collected by the expression acquisition unit and forward it to the processing unit;
The processing unit is used to receive the facial-expression control information sent by the host computer; upon receiving it, the processing unit matches the received control information against the sample expression database so as to identify the facial expression matching the control information collected by the expression acquisition unit, and sends the recognition result to the control unit;
The control unit is used to receive the recognition result sent by the processing unit; upon receiving it, the control unit generates a control signal instructing the action unit to perform the facial-expression action matching the control information collected by the expression acquisition unit, and sends the signal to the motion unit;
The motion unit is used to receive the control signal sent by the control unit; upon receiving it, the motion unit performs the facial-expression action matching the facial-expression control information.
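The signal flow described above (acquisition unit, host computer, processing unit with database match, control unit, motion unit) can be sketched in simplified form as follows. This is an illustrative sketch only; all identifiers and database entries are hypothetical and do not come from the patent.

```python
# Illustrative sketch of the pipeline: acquisition -> host computer ->
# processing unit (sample-database match) -> control unit -> motion unit.
# All names and sample entries are invented for illustration.
from dataclasses import dataclass

@dataclass
class ControlInfo:
    kind: str      # "image", "instruction", or "voice"
    payload: str   # raw facial-expression control information

# Sample expression database: control information -> recognized expression.
SAMPLE_DB = {
    ("instruction", "raise-left-eyebrow"): "left eyebrow up/down",
    ("voice", "open mouth"): "mandible movement",
}

def process(info: ControlInfo):
    """Processing unit: match control info against the sample database."""
    return SAMPLE_DB.get((info.kind, info.payload))

def control(expression: str) -> dict:
    """Control unit: turn a recognized expression into a control signal."""
    return {"action": expression, "command": "actuate"}

def motion(signal: dict) -> str:
    """Motion unit: perform the matched facial-expression action."""
    return f"performing: {signal['action']}"

# The host computer simply forwards acquisition output into the pipeline.
info = ControlInfo("instruction", "raise-left-eyebrow")
expr = process(info)
if expr is not None:
    print(motion(control(expr)))  # -> performing: left eyebrow up/down
```

Note that matching fails (returns None) when the control information has no entry in the sample database, which is why the human-machine interaction unit described below can add new sample images.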
Further, a human-machine interaction unit connected to the sample expression database is also included. The human-machine interaction unit is used to receive new sample facial expression images input by the user and send them to the sample expression database, so that the database stores the newly added sample facial expression images.
Further, the expression acquisition unit includes a first expression acquisition unit, a second expression acquisition unit, and a third expression acquisition unit;
The first expression acquisition unit is used to collect facial expression images; upon collecting one, the first expression acquisition unit sends the collected image to the host computer;
The second expression acquisition unit is used to collect operation instructions for controlling facial expressions input by the user; upon collecting one, the second expression acquisition unit sends the collected instruction to the host computer;
The third expression acquisition unit is used to collect voice information for controlling facial expressions input by the user; upon collecting it, the third expression acquisition unit sends the collected voice information to the host computer.
Further, the processing unit includes a first processing unit, a second processing unit, and a third processing unit;
The first processing unit is connected to the host computer and is used to receive the facial-expression control information sent by the host computer; when the control information collected by the expression acquisition unit is a facial expression image, the first processing unit matches the image against the sample expression database so as to identify the facial expression matching the image, and sends the identified facial expression to the control unit;
The second processing unit is connected to the host computer and is used to receive the facial-expression control information sent by the host computer; when the control information collected by the expression acquisition unit is an operation instruction for controlling a facial expression, the second processing unit matches the instruction against the sample expression database so as to identify the facial expression matching the instruction, and sends the identified facial expression to the control unit;
The third processing unit is connected to the host computer and is used to receive the facial-expression control information sent by the host computer; when the control information collected by the expression acquisition unit is voice information for controlling a facial expression, the third processing unit matches the voice information against the sample expression database so as to identify the facial expression matching the voice information, and sends the identified facial expression to the control unit.
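The division of labour among the three processing units — one per input modality, all matching against the shared sample expression database — can be sketched as follows. All identifiers and entries are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch: each processing unit matches one modality of
# control information against the shared sample expression database.
SAMPLE_DB = {
    "image":       {"smile.png": "mandible movement"},
    "instruction": {"blink-left": "left upper eyelid movement"},
    "voice":       {"look left": "eyeball horizontal movement"},
}

def first_processing_unit(image: str):
    # Matches a facial expression image from the camera.
    return SAMPLE_DB["image"].get(image)

def second_processing_unit(instruction: str):
    # Matches an operation instruction from the touch screen.
    return SAMPLE_DB["instruction"].get(instruction)

def third_processing_unit(voice: str):
    # Matches voice information from the microphone.
    return SAMPLE_DB["voice"].get(voice)

def dispatch(kind: str, payload: str):
    """Host computer routes each kind of control info to its unit."""
    units = {
        "image": first_processing_unit,
        "instruction": second_processing_unit,
        "voice": third_processing_unit,
    }
    return units[kind](payload)
```

Whichever unit performs the match, the output is the same kind of recognition result, so the downstream control units can be structured symmetrically.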
Further, the control unit includes a first control unit, a second control unit, and a third control unit;
The first control unit is connected to the processing unit and is used to receive the facial expression, matching the facial expression image, sent by the first processing unit; upon receiving it, the first control unit generates a control instruction for the facial-expression action matching the facial expression image and sends it to the motion unit;
The second control unit is connected to the processing unit and is used to receive the facial expression, matching the operation instruction for controlling a facial expression, sent by the second processing unit; upon receiving it, the second control unit generates a control instruction for the facial-expression action matching the operation instruction and sends it to the motion unit;
The third control unit is connected to the processing unit and is used to receive the facial expression, matching the voice information for controlling a facial expression, sent by the third processing unit; upon receiving it, the third control unit generates a control instruction for the facial-expression action matching the voice information and sends it to the motion unit.
Further, the facial expression image includes a facial expression image of the left eyebrow moving up and down, of the left eyebrow rotating, of the right eyebrow moving up and down, of the right eyebrow rotating, of the left upper eyelid moving, of the left lower eyelid moving, of the right upper eyelid moving, of the right lower eyelid moving, of the eyeball moving horizontally, and of the mandible moving;
The operation instructions for controlling facial expressions input by the user include an operation instruction for the facial expression of the left eyebrow moving up and down, of the left eyebrow rotating, of the right eyebrow moving up and down, of the right eyebrow rotating, of the left upper eyelid moving, of the left lower eyelid moving, of the right upper eyelid moving, of the right lower eyelid moving, of the eyeball moving horizontally, and of the mandible moving;
The voice information for controlling facial expressions input by the user includes voice information for the facial expression of the left eyebrow moving up and down, of the left eyebrow rotating, of the right eyebrow moving up and down, of the right eyebrow rotating, of the left upper eyelid moving, of the left lower eyelid moving, of the right upper eyelid moving, of the right lower eyelid moving, of the eyeball moving horizontally, and of the mandible moving.
Further, the facial-expression actions performed by the motion unit to match the control information collected by the expression acquisition unit specifically include: the left eyebrow moving up and down, the left eyebrow rotating, the right eyebrow moving up and down, the right eyebrow rotating, the left upper eyelid moving, the left lower eyelid moving, the right upper eyelid moving, the right lower eyelid moving, the eyeball moving horizontally, and the mandible moving.
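Because each input modality (image, instruction, voice) maps onto the same ten facial-expression actions, the action set can be captured once as an enumeration. The names below are illustrative, not taken from the patent:

```python
from enum import Enum

class ExpressionAction(Enum):
    """The ten facial-expression actions the motion unit can perform
    (names invented for illustration; order follows the patent's list)."""
    LEFT_EYEBROW_UP_DOWN = 1
    LEFT_EYEBROW_ROTATE = 2
    RIGHT_EYEBROW_UP_DOWN = 3
    RIGHT_EYEBROW_ROTATE = 4
    LEFT_UPPER_EYELID = 5
    LEFT_LOWER_EYELID = 6
    RIGHT_UPPER_EYELID = 7
    RIGHT_LOWER_EYELID = 8
    EYEBALL_HORIZONTAL = 9
    MANDIBLE = 10

# One recognition result per action, regardless of the input modality.
assert len(ExpressionAction) == 10
```

Keeping a single shared action set is what lets the three processing and control paths converge on one motion unit.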
Further, the first expression acquisition unit is a camera sensing device, the second expression acquisition unit is a touch screen, and the third expression acquisition unit is a microphone.
Beneficial effects of this utility model: in the expression robot provided by this utility model, the expression acquisition unit collects facial-expression control information and sends it to the host computer; the host computer forwards the received information to the processing unit; the processing unit matches the received facial-expression control information against the sample expression database so as to identify the matching facial expression, and sends the recognition result to the control unit; the control unit receives the recognition result and generates a control signal instructing the action unit to perform the matching facial-expression action, which it sends to the motion unit; and the motion unit, on receiving the control signal, performs the matching facial-expression action. The head structure of the expression robot is thereby simplified and improved: connection points are reduced, space utilization is improved, the number of degrees of freedom needed to realize expressions is reduced, the robot runs more smoothly, and installation and operation are simpler. With its interactive modes of facial expression recognition, speech recognition, and operation instructions, the robot can recognize the user's facial-expression actions quickly and accurately, making the interaction process more vivid and engaging.
Accompanying drawing explanation
In order to illustrate the technical solution of this utility model more clearly, the accompanying drawings required in the embodiments are briefly described below. It should be apparent that the drawings described below show only some embodiments of this utility model; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a structural schematic diagram of the expression robot provided by the second embodiment of this utility model.
Fig. 2 is a schematic diagram of the operating principle of the expression robot provided by the first embodiment of this utility model.
Detailed description of the invention
In order to make the technical problems solved, the technical solutions, and the beneficial effects of this utility model clearer, this utility model is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein serve only to explain this utility model and are not intended to limit it.
As shown in Figs. 1 and 2, an embodiment of this utility model provides an expression robot comprising a fixed mount 2 and an expression robot body 1 arranged on the fixed mount 2. A plurality of stepper motors 21, and a plurality of gear-and-rack transmission groups 22 each connected to a respective stepper motor 21, are mounted on the fixed mount 2. The expression robot body 1 includes a shell system 11 and, arranged on the shell system 11, two eyebrow rotating parts 12, two eyelid rotating parts 13, two eyeball rotating parts 14, and a mandible moving part 15. The expression robot body 1 is provided with a plurality of driving steel wires (not indicated in the figures), each connected to the gear-and-rack transmission mechanism, for controlling the motion of the eyebrow rotating parts 12, eyelid rotating parts 13, eyeball rotating parts 14, and mandible moving part 15. The expression robot further includes an expression acquisition unit 3, a host computer 4 connected to the expression acquisition unit 3, a processing unit 5 connected to the host computer 4, a sample expression database 6 and a control unit 7 each connected to the processing unit 5, and an action module unit 8 (hereafter the motion unit 8) connected to the control unit 7; wherein,
The expression acquisition unit 3 is used to collect facial-expression control information and send it to the host computer 4, wherein the facial-expression control information includes a facial expression image, an operation instruction for controlling a facial expression input by the user, or voice information for controlling a facial expression input by the user;
The host computer 4 is used to receive the facial-expression control information collected by the expression acquisition unit 3 and forward it to the processing unit 5;
The processing unit 5 is used to receive the facial-expression control information sent by the host computer 4; upon receiving it, the processing unit 5 matches the received control information against the sample expression database 6 so as to identify the facial expression matching the control information collected by the expression acquisition unit 3, and sends the recognition result to the control unit 7;
The control unit 7 is used to receive the recognition result sent by the processing unit 5; upon receiving it, the control unit 7 generates a control signal instructing the action unit 8 to perform the facial-expression action matching the control information collected by the expression acquisition unit 3, and sends the signal to the motion unit 8;
The motion unit 8 is used to receive the control signal sent by the control unit 7; upon receiving it, the motion unit 8 performs the facial-expression action matching the facial-expression control information.
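Since each action is ultimately driven by stepper motors 21 pulling driving steel wires through the gear-and-rack groups 22, the motion unit's job amounts to mapping a recognized action onto step commands. The motor indices, step counts, and directions below are invented for illustration; the patent does not specify these values numerically.

```python
# Hypothetical mapping from facial-expression actions to stepper-motor
# commands (motor index, step count, direction). Real values would
# depend on the gear-and-rack ratios and steel-wire routing.
MOTOR_MAP = {
    "left eyebrow up/down": (0, 200, +1),
    "mandible movement":    (9, 400, -1),
}

def actuate(action: str) -> str:
    """Motion unit 8: issue a step command for the matched action."""
    motor, steps, direction = MOTOR_MAP[action]
    return f"motor {motor}: {steps} steps, dir {direction:+d}"

print(actuate("mandible movement"))  # -> motor 9: 400 steps, dir -1
```

A table-driven mapping of this kind is one way the design could keep the number of degrees of freedom small, since each action needs only one motor-and-wire entry.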
Further, a human-machine interaction unit 9 connected to the sample expression database 6 is also included. The human-machine interaction unit 9 is used to receive new sample facial expression images input by the user and send them to the sample expression database 6, so that the sample expression database 6 stores the newly added sample facial expression images.
Further, the expression acquisition unit 3 includes a first expression acquisition unit 31, a second expression acquisition unit 32, and a third expression acquisition unit 33;
The first expression acquisition unit 31 is used to collect facial expression images; upon collecting one, the first expression acquisition unit 31 sends the collected image to the host computer 4;
The second expression acquisition unit 32 is used to collect operation instructions for controlling facial expressions input by the user; upon collecting one, the second expression acquisition unit 32 sends the collected instruction to the host computer 4;
The third expression acquisition unit 33 is used to collect voice information for controlling facial expressions input by the user; upon collecting it, the third expression acquisition unit 33 sends the collected voice information to the host computer 4.
Further, the processing unit 5 includes a first processing unit 51, a second processing unit 52, and a third processing unit 53;
The first processing unit 51 is connected to the host computer 4 and is used to receive the facial-expression control information sent by the host computer 4; when the control information collected by the expression acquisition unit 3 is a facial expression image, the first processing unit 51 matches the image against the sample expression database 6 so as to identify the facial expression matching the image, and sends the identified facial expression to the control unit 7;
The second processing unit 52 is connected to the host computer 4 and is used to receive the facial-expression control information sent by the host computer 4; when the control information collected by the expression acquisition unit 3 is an operation instruction for controlling a facial expression, the second processing unit 52 matches the instruction against the sample expression database 6 so as to identify the facial expression matching the instruction, and sends the identified facial expression to the control unit 7;
The third processing unit 53 is connected to the host computer 4 and is used to receive the facial-expression control information sent by the host computer 4; when the control information collected by the expression acquisition unit 3 is voice information for controlling a facial expression, the third processing unit 53 matches the voice information against the sample expression database 6 so as to identify the facial expression matching the voice information, and sends the identified facial expression to the control unit 7.
Further, the control unit 7 includes a first control unit 71, a second control unit 72, and a third control unit 73;
The first control unit 71 is connected to the processing unit 5 and is used to receive the facial expression, matching the facial expression image, sent by the first processing unit 51; upon receiving it, the first control unit 71 generates a control instruction for the facial-expression action matching the facial expression image and sends it to the motion unit 8;
The second control unit 72 is connected to the processing unit 5 and is used to receive the facial expression, matching the operation instruction for controlling a facial expression, sent by the second processing unit 52; upon receiving it, the second control unit 72 generates a control instruction for the facial-expression action matching the operation instruction and sends it to the motion unit 8;
Described 3rd control unit 73 is connected with handled unit 5, for receiving what described 3rd processing unit 53 sent The human face expression being mutually matched with the voice messaging controlling human face expression, and receiving what described 3rd processing unit 53 sent With control human face expression voice messaging be mutually matched human face expression time, generate mutual with the voice messaging controlling human face expression The control instruction of the human face expression action of coupling is sent to described motor unit 8.
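The three parallel recognition-and-control paths described above (facial expression image through units 51 and 71, operation instruction through units 52 and 72, voice information through units 53 and 73) amount to a match-then-dispatch loop. A minimal sketch follows; the function names, the database stub, and the motor-unit callback are illustrative assumptions, since the utility model defines hardware units rather than a software interface.

```python
# Illustrative sketch, not the patented implementation: routing the three
# kinds of human face expression control information (image, operation
# instruction, voice) through a match against a sample expression database
# and on to a motor-unit callback, mirroring units 51-53 and 71-73.
from typing import Callable, Optional

# Stub for sample expression database 6: (input kind, input key) -> expression.
SAMPLE_DB = {
    ("image", "brow_raise_frame"): "left_brow_up_down",
    ("operation", "btn_left_brow"): "left_brow_up_down",
    ("voice", "raise the left eyebrow"): "left_brow_up_down",
}

def process(kind: str, key: str) -> Optional[str]:
    """Processing unit 5x: match control information against the database."""
    return SAMPLE_DB.get((kind, key))

def control(kind: str, key: str, motor_unit: Callable[[str], None]) -> bool:
    """Control unit 7x: on a successful match, send the control instruction
    for the matched expression action to motor unit 8."""
    expression = process(kind, key)
    if expression is None:
        return False           # no matching sample expression
    motor_unit(expression)     # motor unit performs the expression action
    return True
```

In this sketch an unmatched input is simply ignored, which corresponds to the patent's behavior of acting only when a match is identified.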
Preferably, in this embodiment of the utility model, the facial expression images comprise one image for each of the following ten movements: left eyebrow up-and-down movement, left eyebrow rotation, right eyebrow up-and-down movement, right eyebrow rotation, left upper eyelid movement, left lower eyelid movement, right upper eyelid movement, right lower eyelid movement, horizontal eyeball movement, and jaw movement;
The operation instructions for controlling a human face expression input by the user comprise one operation instruction for each of the same ten movements;
The voice information for controlling a human face expression input by the user comprises one voice instruction for each of the same ten movements.
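The three input channels above address the same ten expression primitives, one command per channel per primitive. A minimal sketch of such a parallel vocabulary follows; the English identifiers and key format are illustrative assumptions, as the utility model does not define any software naming.

```python
# Illustrative sketch: the ten expression primitives listed above, with one
# command entry per input channel (image label, operation instruction,
# voice phrase). Identifiers are assumptions for illustration only.
EXPRESSIONS = [
    "left_brow_up_down", "left_brow_rotate",
    "right_brow_up_down", "right_brow_rotate",
    "left_upper_lid", "left_lower_lid",
    "right_upper_lid", "right_lower_lid",
    "eyes_horizontal", "jaw",
]

# One parallel vocabulary per input kind; each maps a command key to the
# expression primitive it selects.
VOCAB = {
    kind: {f"{kind}:{name}": name for name in EXPRESSIONS}
    for kind in ("image", "operation", "voice")
}
```

Keeping the three vocabularies in lockstep over one primitive list means adding an eleventh expression later touches a single place.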
Further, the facial expression actions performed by the motor unit 8 and matching the human face expression control information collected by the expression collecting unit 3 specifically include: a left eyebrow up-and-down action, a left eyebrow rotation action, a right eyebrow up-and-down action, a right eyebrow rotation action, a left upper eyelid action, a left lower eyelid action, a right upper eyelid action, a right lower eyelid action, a horizontal eyeball action, and a jaw action.
Further, the first expression collecting unit 31 is a camera (imaging sensing device), the second expression collecting unit 32 is a touch screen, and the third expression collecting unit 33 is a microphone.
It should be noted that the camera in the expression robot interactive system provided by this utility model may be connected to the expression robot by Universal Serial Bus (USB), or may be connected wirelessly, for example by Bluetooth or infrared. This embodiment places no particular limitation on the deployment and connection mode between the expression robot and the camera, provided the essential connection relationship exists.
It should be noted that in the expression robot provided by this utility model, the robot body frame 1 includes a shell 11 and, arranged on the shell 11, two eyebrow rotating parts 12, two eyelid rotating parts 13, two eyeball rotating parts 14 and a jaw moving part 15. Proceeding from the overall structure of the expression robot, and without compromising the expressive effect, this utility model redesigns the structure of these four main facial modules (eyebrow rotating parts 12, eyelid rotating parts 13, eyeball rotating parts 14 and jaw moving part 15) and controls them with an 11-channel motor linkage, precisely realizing control over 11 degrees of freedom, as follows:
(1) Left eyebrow: up-and-down degree of freedom (two motors synchronized);
(2) Left eyebrow: rotational degree of freedom (two motors counter-rotating);
(3) Right eyebrow: up-and-down degree of freedom (two motors synchronized);
(4) Right eyebrow: rotational degree of freedom (two motors counter-rotating);
(5) Left upper eyelid: up-and-down degree of freedom;
(6) Left lower eyelid: up-and-down degree of freedom;
(7) Right upper eyelid: up-and-down degree of freedom;
(8) Right lower eyelid: up-and-down degree of freedom;
(9) Both eyeballs: up-and-down degree of freedom;
(10) Both eyeballs: left-right degree of freedom;
(11) Jaw: up-and-down degree of freedom.
In terms of power, higher-precision motors are selected as the drives. The drive system is separated from the robot body frame, and power is transmitted through gear drives to the driving steel wires, which in turn actuate each facial module so that it can move freely.
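The 11-channel motor linkage described above (a synchronized motor pair for each eyebrow's raising, the same pair counter-rotating for rotation, and one motor for each remaining degree of freedom) can be sketched as a mapping from degree-of-freedom commands to per-motor set-points. The motor indices, names, and command convention below are illustrative assumptions, not part of the utility model.

```python
# Illustrative sketch of the 11-DOF, 11-motor linkage described above.
# Motor indices and identifiers are assumptions for illustration only;
# the patent does not specify a software interface.

EYEBROW_PAIRS = {"left_brow": (0, 1), "right_brow": (2, 3)}
SINGLE_MOTORS = {
    "left_upper_lid": 4, "left_lower_lid": 5,
    "right_upper_lid": 6, "right_lower_lid": 7,
    "eyes_vertical": 8, "eyes_horizontal": 9, "jaw": 10,
}

def dof_command(dof: str, value: float) -> dict[int, float]:
    """Map one degree-of-freedom command to per-motor set-points.

    Eyebrow raise/lower drives its two motors synchronously; eyebrow
    rotation drives them in opposite directions, matching items (1)-(4).
    """
    if dof in ("left_brow_raise", "right_brow_raise"):
        a, b = EYEBROW_PAIRS[dof.rsplit("_", 1)[0]]
        return {a: value, b: value}       # two motors synchronized
    if dof in ("left_brow_rotate", "right_brow_rotate"):
        a, b = EYEBROW_PAIRS[dof.rsplit("_", 1)[0]]
        return {a: value, b: -value}      # two motors counter-rotating
    return {SINGLE_MOTORS[dof]: value}    # one motor per remaining DOF
```

Four eyebrow commands over two motor pairs plus seven single-motor commands account for the 11 motors and 11 degrees of freedom.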
Beneficial effects of the utility model: in the expression robot provided by this utility model, the expression collecting unit collects human face expression control information and sends it to the host computer; the host computer forwards the received control information to the processing unit; the processing unit matches the received control information against the sample expression database so as to identify the human face expression matching the control information, and sends the recognition result to the control unit; the control unit, upon receiving the recognition result, generates a control signal instructing the motor unit to perform the matching facial expression action and sends it to the motor unit; and the motor unit, upon receiving the control signal, performs the matching facial expression action. The head structure of the expression robot is thereby simplified and improved: connection points are reduced, space utilization is improved, the number of degrees of freedom needed to realize expressions is reduced, the robot runs more smoothly, and installation and operation are simpler. With its interactive modes of facial expression recognition, speech recognition and operation instructions, the robot can identify the user's facial expression actions quickly and accurately, making interaction more vivid and interesting.
The above are preferred embodiments of the utility model. It should be noted that a person of ordinary skill in the art may make several improvements and modifications without departing from the principles of the utility model, and such improvements and modifications shall also be regarded as falling within the protection scope of the utility model.

Claims (8)

1. An expression robot, comprising a mounting frame and an expression robot body arranged on the mounting frame, wherein the mounting frame is fitted with a plurality of stepper motors and a plurality of rack-and-pinion transmission groups respectively connected to the stepper motors; the expression robot body comprises a shell and, arranged on the shell, two eyebrow rotating parts, two eyelid rotating parts, two eyeball rotating parts and a jaw moving part; and the expression robot body is provided with a plurality of driving steel wires, respectively connected to the rack-and-pinion transmission groups, for controlling the movement of the eyebrow rotating parts, eyelid rotating parts, eyeball rotating parts and jaw moving part; characterized in that the expression robot further comprises an expression collecting unit, a host computer connected to the expression collecting unit, a processing unit connected to the host computer, a sample expression database and a control unit each connected to the processing unit, and a motor unit connected to the control unit; wherein,
the expression collecting unit is configured to collect human face expression control information and send the collected information to the host computer, wherein the human face expression control information includes a facial expression image, an operation instruction for controlling a human face expression input by a user, or voice information for controlling a human face expression input by a user;
the host computer is configured to receive the human face expression control information collected and sent by the expression collecting unit, and to send the received information to the processing unit;
the processing unit is configured to receive, from the host computer, the human face expression control information collected by the expression collecting unit; upon receiving it, the processing unit matches the received control information against the sample expression database so as to identify the human face expression matching the control information, and sends the recognition result to the control unit;
the control unit is configured to receive the recognition result sent by the processing unit; upon receiving it, the control unit generates a control signal instructing the motor unit to perform the facial expression action matching the control information, and sends the control signal to the motor unit;
the motor unit is configured to receive, from the control unit, the control signal for the facial expression action matching the human face expression control information; upon receiving it, the motor unit performs the facial expression action matching the human face expression control information.
2. The expression robot according to claim 1, characterized by further comprising a man-machine interaction unit connected to the sample expression database, the man-machine interaction unit being configured to receive newly added sample facial expression images input by the user and send them to the sample expression database, so that the sample expression database stores the newly added sample facial expression images.
3. The expression robot according to claim 1, characterized in that the expression collecting unit includes a first expression collecting unit, a second expression collecting unit and a third expression collecting unit;
the first expression collecting unit is configured to collect facial expression images and, when a facial expression image is collected, send it to the host computer;
the second expression collecting unit is configured to collect the operation instructions for controlling a human face expression input by the user and, when such an operation instruction is collected, send it to the host computer;
the third expression collecting unit is configured to collect the voice information for controlling a human face expression input by the user and, when such voice information is collected, send it to the host computer.
4. The expression robot according to claim 1, characterized in that the processing unit includes a first processing unit, a second processing unit and a third processing unit;
the first processing unit is connected to the host computer and receives, from the host computer, the human face expression control information collected by the expression collecting unit; when that information is a facial expression image, the first processing unit matches the facial expression image against the sample expression database so as to identify the human face expression matching the facial expression image, and sends the identified human face expression to the control unit;
the second processing unit is connected to the host computer and receives, from the host computer, the human face expression control information collected by the expression collecting unit; when that information is an operation instruction for controlling a human face expression, the second processing unit matches the operation instruction against the sample expression database so as to identify the human face expression matching the operation instruction, and sends the identified human face expression to the control unit;
the third processing unit is connected to the host computer and receives, from the host computer, the human face expression control information collected by the expression collecting unit; when that information is voice information for controlling a human face expression, the third processing unit matches the voice information against the sample expression database so as to identify the human face expression matching the voice information, and sends the identified human face expression to the control unit.
5. The expression robot according to claim 4, characterized in that the control unit includes a first control unit, a second control unit and a third control unit;
the first control unit is connected to the processing unit and receives, from the first processing unit, the human face expression matching the facial expression image; upon receiving it, the first control unit generates a control instruction for the facial expression action matching the facial expression image and sends the instruction to the motor unit;
the second control unit is connected to the processing unit and receives, from the second processing unit, the human face expression matching the operation instruction for controlling a human face expression; upon receiving it, the second control unit generates a control instruction for the facial expression action matching that operation instruction and sends the instruction to the motor unit;
the third control unit is connected to the processing unit and receives, from the third processing unit, the human face expression matching the voice information for controlling a human face expression; upon receiving it, the third control unit generates a control instruction for the facial expression action matching that voice information and sends the instruction to the motor unit.
6. The expression robot according to claim 1, characterized in that the facial expression images comprise one image for each of the following ten movements: left eyebrow up-and-down movement, left eyebrow rotation, right eyebrow up-and-down movement, right eyebrow rotation, left upper eyelid movement, left lower eyelid movement, right upper eyelid movement, right lower eyelid movement, horizontal eyeball movement, and jaw movement;
the operation instructions for controlling a human face expression input by the user comprise one operation instruction for each of the same ten movements;
the voice information for controlling a human face expression input by the user comprises one voice instruction for each of the same ten movements.
7. The expression robot according to claim 1, characterized in that the facial expression actions performed by the motor unit and matching the human face expression control information collected by the expression collecting unit specifically include: a left eyebrow up-and-down action, a left eyebrow rotation action, a right eyebrow up-and-down action, a right eyebrow rotation action, a left upper eyelid action, a left lower eyelid action, a right upper eyelid action, a right lower eyelid action, a horizontal eyeball action, and a jaw action.
8. The expression robot according to claim 3, characterized in that the first expression collecting unit is a camera (imaging sensing device), the second expression collecting unit is a touch screen, and the third expression collecting unit is a microphone.
CN201620426948.8U 2016-05-12 2016-05-12 A kind of expression robot Expired - Fee Related CN205750354U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201620426948.8U CN205750354U (en) 2016-05-12 2016-05-12 A kind of expression robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201620426948.8U CN205750354U (en) 2016-05-12 2016-05-12 A kind of expression robot

Publications (1)

Publication Number Publication Date
CN205750354U true CN205750354U (en) 2016-11-30

Family

ID=57367542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201620426948.8U Expired - Fee Related CN205750354U (en) 2016-05-12 2016-05-12 A kind of expression robot

Country Status (1)

Country Link
CN (1) CN205750354U (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106625678A (en) * 2016-12-30 2017-05-10 首都师范大学 Robot expression control method and device
CN106625678B (en) * 2016-12-30 2017-12-08 首都师范大学 Robot expression control method and device
CN106695843A (en) * 2017-03-22 2017-05-24 海南职业技术学院 Interactive robot capable of imitating human facial expressions
CN107330418A (en) * 2017-07-12 2017-11-07 深圳市铂越科技有限公司 A kind of man-machine interaction method, robot system and storage medium
CN107330418B (en) * 2017-07-12 2021-06-01 深圳市铂越科技有限公司 Robot system
CN110977994A (en) * 2019-11-07 2020-04-10 山东大未来人工智能研究院有限公司 Intelligent robot with facial expression communication function

Similar Documents

Publication Publication Date Title
CN205721625U (en) A kind of expression robot interactive system
CN205750354U (en) A kind of expression robot
CN103324100B (en) An information-driven emotional in-vehicle robot
CN101474481B (en) Emotional robot system
CN106737760B (en) Human-type intelligent robot and human-computer communication system
CN108983636B (en) Man-machine intelligent symbiotic platform system
CN103853071B (en) Man-machine facial expression interactive system based on bio signal
CN101436037A (en) Dining room service robot system
CN102699914A (en) Robot
CN110236879B (en) Exoskeleton rehabilitation training mechanical arm and voice interaction system thereof
CN105437247A (en) Expression robot
CN102500113A (en) Comprehensive greeting robot based on smart phone interaction
CN106313072A (en) Humanoid robot based on leap motion of Kinect
CN101618280A (en) Humanoid-head robot device with human-computer interaction function and behavior control method thereof
Wang et al. Human-centered, ergonomic wearable device with computer vision augmented intelligence for VR multimodal human-smart home object interaction
CN103777636A (en) Self-propelled video trolley system based on WiFi communication
CN110716578A (en) Aircraft control system based on hybrid brain-computer interface and control method thereof
Maheux et al. T-Top, a SAR experimental platform
CN206484561U (en) An intelligent domestic companion care robot
Wang et al. Coordinated control of an intelligentwheelchair based on a brain-computer interface and speech recognition
CN110405794A (en) A hugging robot for children and its control method
CN205969049U (en) Guest -meeting robot
HemaMalini et al. Eye and voice controlled wheel chair
Jean et al. Development of an office delivery robot with multimodal human-robot interactions
CN113334397B (en) Emotion recognition entity robot device

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190529

Address after: 516007 3rd floor of Kaizhong Zhihui Park, No. 8 Huaan Road, Zhongkai High-tech Zone, Huizhou City, Guangdong Province

Patentee after: Guangdong Kangdrui Supply Chain Management Co., Ltd.

Address before: 518000 B, 2, 3 Floors (South 2nd Floor) of 1 Building, Huianda Industrial Park, Tangtou Community, Tangtou Avenue, Shiyan Street, Baoan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN JINLE INTELLIGENT HEALTH TECHNOLOGY CO., LTD.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20161130

Termination date: 20210512

CF01 Termination of patent right due to non-payment of annual fee