An expression robot
Technical field
This utility model relates to the field of robotics, and in particular to an expression robot.
Background art
An expression robot is an intelligent robot capable of simulating human facial expressions and emotional actions. As a service robot, it plays an important role in realizing human-machine interaction, particularly affective interaction. Because of its humanized and emotional character, research on expression robots has very broad application prospects, and in recent years more and more research institutions have organized research on robots with humanoid facial expressions.
Unlike traditional industrial robots, expression robots place higher demands on interactivity, intelligence and autonomy. Their study involves knowledge from many fields, including mechanical design, automatic control, computer intelligence, psychology and cognitive science, and thus has a typical multidisciplinary character; how to apply this multi-field knowledge in an integrated way is a key issue of emotional expression. In addition, one of the research difficulties of expression robots is completing the mechanical structure design within a confined space: the overall design is characterized by narrow space, small range of motion, precise movement and small loads, and at the same time the actions of the whole machine must be coordinated and conform to the motion laws of the human head and face. The motion transmission must therefore be accurate and free of distortion, which is also a problem this utility model seeks to solve.
At present, some European, American and Japanese universities and research institutions have achieved certain research results; the robots they have developed focus on mechanical stability and have complicated head structures and lifelike facial skin and skin texture.
Although the existing expression robots described above are basically similar to humans in shape, their internal mechanical structures are complicated, their mass is excessive, they are very expensive, and their appearance is unattractive. Most of their driving means are DC motors or steering gears, and within the narrow head structure the control mechanisms at different positions interweave, so that the motion transmission is uncoordinated, which ultimately affects the overall appearance and head movement effect of the robot. Moreover, in the current field of expression robot research, most work still focuses on the structural design of the head; interactivity is poor, only one aspect of human-machine interaction is considered, and the robots do not possess the ability of multichannel affective interaction and control. How to apply intelligent technologies such as visual expression analysis and speech recognition to expression robots has not been sufficiently studied, so that the robot cannot quickly and effectively identify and reproduce the facial expression actions of the user, and real-time interactivity is not good.
In view of this, it is necessary to provide an expression robot capable of real-time interaction with a real person.
Summary of the utility model
The purpose of this utility model is to overcome the above deficiencies of the prior art and to provide an expression robot capable of real-time interaction with a real person, so as to solve the technical problem that existing expression robots, owing to their poor interactivity, cannot quickly and effectively identify and reproduce the facial expression actions of the user.
This utility model is achieved as follows. An expression robot includes a fixed mount and an expression robot body arranged on the fixed mount. A plurality of stepper motors and a plurality of rack-and-pinion sets respectively connected with the stepper motors are installed on the fixed mount. The expression robot body includes a shell system and, arranged on the shell system, two eyebrow rotating parts, two eyelid rotating parts, two eyeball rotating parts and a mandible movement part. The expression robot body is provided with a plurality of driving steel wires which are respectively connected with the rack-and-pinion transmission mechanisms and are used for driving the eyebrow rotating parts, the eyelid rotating parts, the eyeball rotating parts and the mandible movement part. The expression robot further includes an expression collecting unit, a host computer connected with the expression collecting unit, a processing unit connected with the host computer, a sample expression database and a control unit respectively connected with the processing unit, and an action unit connected with the control unit; wherein,
the expression collecting unit is used for collecting facial expression control information and sending the collected facial expression control information to the host computer, wherein the facial expression control information includes a facial expression image, an operation instruction input by the user for controlling facial expressions, or voice information input by the user for controlling facial expressions;
the host computer is used for receiving the facial expression control information collected by the expression collecting unit and sending it to the processing unit;
the processing unit is used for receiving, from the host computer, the facial expression control information collected by the expression collecting unit; upon receiving it, the processing unit matches the received facial expression control information against the sample expression database so as to identify the facial expression that matches the collected facial expression control information, and sends the recognition result to the control unit;
the control unit is used for receiving the recognition result sent by the processing unit; upon receiving it, the control unit generates a control signal instructing the action unit to make the facial expression action that matches the collected facial expression control information, and sends this control signal to the action unit;
the action unit is used for receiving, from the control unit, the control signal instructing it to make the facial expression action that matches the facial expression control information; upon receiving this control signal, the action unit makes the facial expression action that matches the facial expression control information.
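The collect-match-actuate flow described above can be sketched in a few lines of Python; every class and method name here is an illustrative assumption, not part of the utility model.

```python
# Minimal sketch of the pipeline: expression collecting unit -> processing
# unit (matching against the sample expression database) -> control unit ->
# action unit. All names are illustrative assumptions.

class SampleExpressionDatabase:
    def __init__(self, samples):
        # Maps facial expression control information to a facial expression.
        self.samples = samples

    def match(self, control_info):
        return self.samples.get(control_info)

class ProcessingUnit:
    def __init__(self, database):
        self.database = database

    def recognize(self, control_info):
        # Match the received control information against the sample database.
        return self.database.match(control_info)

class ControlUnit:
    def make_control_signal(self, expression):
        # Control signal telling the action unit which expression action to make.
        return {"action": expression}

class ActionUnit:
    def perform(self, signal):
        return "performing " + signal["action"]

def handle(control_info, processing, control, action):
    expression = processing.recognize(control_info)
    if expression is None:
        return None  # no matching facial expression in the sample database
    return action.perform(control.make_control_signal(expression))

db = SampleExpressionDatabase({"raise-left-eyebrow": "left eyebrow up-and-down motion"})
result = handle("raise-left-eyebrow", ProcessingUnit(db), ControlUnit(), ActionUnit())
```

The host computer is omitted from the sketch because, in the text, it only relays the control information unchanged from the collecting unit to the processing unit.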
Further, the expression robot also includes a man-machine interaction unit connected with the sample expression database. The man-machine interaction unit is used for receiving newly added sample facial expression images input by the user and sending them to the sample expression database, so that the sample expression database preserves the newly added sample facial expression images.
Further, the expression collecting unit includes a first expression collecting unit, a second expression collecting unit and a third expression collecting unit;
the first expression collecting unit is used for collecting facial expression images and, when a facial expression image is collected, sending it to the host computer;
the second expression collecting unit is used for collecting operation instructions input by the user for controlling facial expressions and, when such an operation instruction is collected, sending it to the host computer;
the third expression collecting unit is used for collecting voice information input by the user for controlling facial expressions and, when such voice information is collected, sending it to the host computer.
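The three collecting units differ only in the modality of the control information they forward to the host computer; a hedged sketch follows, in which the function names and the modality tags are assumptions chosen for illustration.

```python
# Each collecting unit tags its control information with its modality so the
# processing side can route it; the tag names are illustrative assumptions.

def camera_collect(image_bytes):
    # First expression collecting unit: a facial expression image.
    return {"type": "image", "data": image_bytes}

def touch_screen_collect(command):
    # Second expression collecting unit: an operation instruction from the user.
    return {"type": "instruction", "data": command}

def microphone_collect(audio_bytes):
    # Third expression collecting unit: voice information from the user.
    return {"type": "voice", "data": audio_bytes}

class HostComputer:
    """Relays collected control information to the processing unit unchanged."""
    def __init__(self):
        self.forwarded = []

    def receive(self, control_info):
        self.forwarded.append(control_info)
        return control_info

host = HostComputer()
info = host.receive(touch_screen_collect("left eyebrow up-and-down motion"))
```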
Further, the processing unit includes a first processing unit, a second processing unit and a third processing unit;
the first processing unit is connected with the host computer and is used for receiving, from the host computer, the facial expression control information collected by the expression collecting unit; when that control information is a facial expression image, the first processing unit matches the facial expression image against the sample expression database so as to identify the facial expression that matches the facial expression image, and sends the identified facial expression to the control unit;
the second processing unit is connected with the host computer and is used for receiving, from the host computer, the facial expression control information collected by the expression collecting unit; when that control information is an operation instruction for controlling facial expressions, the second processing unit matches the operation instruction against the sample expression database so as to identify the facial expression that matches the operation instruction, and sends the identified facial expression to the control unit;
the third processing unit is connected with the host computer and is used for receiving, from the host computer, the facial expression control information collected by the expression collecting unit; when that control information is voice information for controlling facial expressions, the third processing unit matches the voice information against the sample expression database so as to identify the facial expression that matches the voice information, and sends the identified facial expression to the control unit.
Further, the control unit includes a first control unit, a second control unit and a third control unit;
the first control unit is connected with the processing unit and is used for receiving, from the first processing unit, the facial expression that matches the facial expression image; upon receiving it, the first control unit generates a control instruction for the facial expression action that matches the facial expression image and sends it to the action unit;
the second control unit is connected with the processing unit and is used for receiving, from the second processing unit, the facial expression that matches the operation instruction for controlling facial expressions; upon receiving it, the second control unit generates a control instruction for the facial expression action that matches the operation instruction and sends it to the action unit;
the third control unit is connected with the processing unit and is used for receiving, from the third processing unit, the facial expression that matches the voice information for controlling facial expressions; upon receiving it, the third control unit generates a control instruction for the facial expression action that matches the voice information and sends it to the action unit.
Further, the facial expression images include a facial expression image of left eyebrow up-and-down motion, a facial expression image of left eyebrow rotary motion, a facial expression image of right eyebrow up-and-down motion, a facial expression image of right eyebrow rotary motion, a facial expression image of left upper eyelid motion, a facial expression image of left lower eyelid motion, a facial expression image of right upper eyelid motion, a facial expression image of right lower eyelid motion, a facial expression image of eyeball horizontal motion and a facial expression image of mandible motion;
the operation instructions input by the user for controlling facial expressions include an operation instruction for left eyebrow up-and-down motion, an operation instruction for left eyebrow rotary motion, an operation instruction for right eyebrow up-and-down motion, an operation instruction for right eyebrow rotary motion, an operation instruction for left upper eyelid motion, an operation instruction for left lower eyelid motion, an operation instruction for right upper eyelid motion, an operation instruction for right lower eyelid motion, an operation instruction for eyeball horizontal motion and an operation instruction for mandible motion;
the voice information input by the user for controlling facial expressions includes voice information for left eyebrow up-and-down motion, voice information for left eyebrow rotary motion, voice information for right eyebrow up-and-down motion, voice information for right eyebrow rotary motion, voice information for left upper eyelid motion, voice information for left lower eyelid motion, voice information for right upper eyelid motion, voice information for right lower eyelid motion, voice information for eyeball horizontal motion and voice information for mandible motion.
Further, the facial expression actions made by the action unit that match the facial expression control information collected by the expression collecting unit specifically include a facial expression action of left eyebrow up-and-down motion, a facial expression action of left eyebrow rotary motion, a facial expression action of right eyebrow up-and-down motion, a facial expression action of right eyebrow rotary motion, a facial expression action of left upper eyelid motion, a facial expression action of left lower eyelid motion, a facial expression action of right upper eyelid motion, a facial expression action of right lower eyelid motion, a facial expression action of eyeball horizontal motion and a facial expression action of mandible motion.
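The ten facial expression actions enumerated above can be gathered into a single enumeration; the member names below are assumptions chosen for illustration and are not part of the utility model.

```python
from enum import Enum

class ExpressionAction(Enum):
    # The ten facial expression actions listed in the text.
    LEFT_EYEBROW_UP_DOWN = "left eyebrow up-and-down motion"
    LEFT_EYEBROW_ROTARY = "left eyebrow rotary motion"
    RIGHT_EYEBROW_UP_DOWN = "right eyebrow up-and-down motion"
    RIGHT_EYEBROW_ROTARY = "right eyebrow rotary motion"
    LEFT_UPPER_EYELID = "left upper eyelid motion"
    LEFT_LOWER_EYELID = "left lower eyelid motion"
    RIGHT_UPPER_EYELID = "right upper eyelid motion"
    RIGHT_LOWER_EYELID = "right lower eyelid motion"
    EYEBALL_HORIZONTAL = "eyeball horizontal motion"
    MANDIBLE = "mandible motion"
```

An enumeration like this makes the one-to-one correspondence between control information (image, instruction or voice) and expression action explicit, since each modality addresses the same set of ten motions.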
Further, the first expression collecting unit is a camera device, the second expression collecting unit is a touch screen, and the third expression collecting unit is a microphone.
Beneficial effects of this utility model: in the expression robot provided by this utility model, the expression collecting unit collects facial expression control information and sends it to the host computer; the host computer sends the received facial expression control information to the processing unit; the processing unit matches the received facial expression control information against the sample expression database so as to identify the facial expression that matches it, and sends the recognition result to the control unit; the control unit receives the recognition result and generates a control signal instructing the action unit to make the matching facial expression action; and the action unit, on receiving the control signal, makes the facial expression action that matches the facial expression control information. The head structure of the expression robot is thereby simplified and improved, the number of connection points is reduced, the space utilization ratio is improved, and the number of degrees of freedom needed to realize expressions is reduced, so the robot runs more smoothly and is simpler to install and operate. With the interactive modes of facial expression recognition, speech recognition and operation instructions, it can identify the user's facial expression actions quickly and accurately, making the interaction process more vivid and interesting.
Brief description of the drawings
In order to illustrate the technical solution of this utility model more clearly, the accompanying drawings required in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of this utility model; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative work.
Fig. 1 is a structural schematic diagram of the expression robot provided by the second embodiment of this utility model.
Fig. 2 is a schematic diagram of the operating principle of the expression robot provided by the first embodiment of this utility model.
Detailed description of the embodiments
In order to make the technical problem to be solved, the technical scheme and the beneficial effects of this utility model clearer, this utility model is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain this utility model and are not used to limit it.
As shown in Fig. 1 and Fig. 2, an embodiment of this utility model provides an expression robot. The expression robot includes a fixed mount 2 and an expression robot body 1 arranged on the fixed mount 2. A plurality of stepper motors 21 and a plurality of rack-and-pinion sets 22 respectively connected with the stepper motors 21 are installed on the fixed mount 2. The expression robot body 1 includes a shell system 11 and, arranged on the shell system 11, two eyebrow rotating parts 12, two eyelid rotating parts 13, two eyeball rotating parts 14 and a mandible movement part 15. The expression robot body 1 is provided with a plurality of driving steel wires (not marked in the figures) which are respectively connected with the rack-and-pinion transmission mechanisms and are used for driving the eyebrow rotating parts 12, the eyelid rotating parts 13, the eyeball rotating parts 14 and the mandible movement part 15. The expression robot further includes an expression collecting unit 3, a host computer 4 connected with the expression collecting unit 3, a processing unit 5 connected with the host computer 4, a sample expression database 6 and a control unit 7 respectively connected with the processing unit 5, and an action unit 8 connected with the control unit 7; wherein,
the expression collecting unit 3 is used for collecting facial expression control information and sending the collected facial expression control information to the host computer 4, wherein the facial expression control information includes a facial expression image, an operation instruction input by the user for controlling facial expressions, or voice information input by the user for controlling facial expressions;
the host computer 4 is used for receiving the facial expression control information collected by the expression collecting unit 3 and sending it to the processing unit 5;
the processing unit 5 is used for receiving, from the host computer 4, the facial expression control information collected by the expression collecting unit 3; upon receiving it, the processing unit 5 matches the received facial expression control information against the sample expression database 6 so as to identify the facial expression that matches the facial expression control information collected by the expression collecting unit 3, and sends the recognition result to the control unit 7;
the control unit 7 is used for receiving the recognition result sent by the processing unit 5; upon receiving it, the control unit 7 generates a control signal instructing the action unit 8 to make the facial expression action that matches the facial expression control information collected by the expression collecting unit 3, and sends this control signal to the action unit 8;
the action unit 8 is used for receiving, from the control unit 7, the control signal instructing it to make the facial expression action that matches the facial expression control information; upon receiving this control signal, the action unit 8 makes the facial expression action that matches the facial expression control information.
Further, the expression robot also includes a man-machine interaction unit 9 connected with the sample expression database 6. The man-machine interaction unit 9 is used for receiving newly added sample facial expression images input by the user and sending them to the sample expression database 6, so that the sample expression database 6 preserves the newly added sample facial expression images.
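A minimal sketch of the man-machine interaction unit 9 forwarding a newly input sample facial expression image to the sample expression database 6; the dict-based storage and all names are assumptions made for illustration, not details of the utility model.

```python
class SampleExpressionDatabase:
    """Stores sample facial expression images keyed by an expression label."""
    def __init__(self):
        self.samples = {}

    def save(self, label, image):
        # Preserve the newly added sample facial expression image.
        self.samples[label] = image

class ManMachineInteractionUnit:
    def __init__(self, database):
        self.database = database

    def add_sample(self, label, image):
        # Forward the user's newly input sample image to the database.
        self.database.save(label, image)

db = SampleExpressionDatabase()
ui = ManMachineInteractionUnit(db)
ui.add_sample("left eyebrow up-and-down motion", b"\x89PNG...")
```

Extending the database this way is what lets the matching step recognize expressions beyond those shipped with the robot.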
Further, the expression collecting unit 3 includes a first expression collecting unit 31, a second expression collecting unit 32 and a third expression collecting unit 33;
the first expression collecting unit 31 is used for collecting facial expression images and, when a facial expression image is collected, sending it to the host computer 4;
the second expression collecting unit 32 is used for collecting operation instructions input by the user for controlling facial expressions and, when such an operation instruction is collected, sending it to the host computer 4;
the third expression collecting unit 33 is used for collecting voice information input by the user for controlling facial expressions and, when such voice information is collected, sending it to the host computer 4.
Further, the processing unit 5 includes a first processing unit 51, a second processing unit 52 and a third processing unit 53;
the first processing unit 51 is connected with the host computer 4 and is used for receiving, from the host computer 4, the facial expression control information collected by the expression collecting unit 3; when that control information is a facial expression image, the first processing unit 51 matches the facial expression image against the sample expression database 6 so as to identify the facial expression that matches the facial expression image, and sends the identified facial expression to the control unit 7;
the second processing unit 52 is connected with the host computer 4 and is used for receiving, from the host computer 4, the facial expression control information collected by the expression collecting unit 3; when that control information is an operation instruction for controlling facial expressions, the second processing unit 52 matches the operation instruction against the sample expression database 6 so as to identify the facial expression that matches the operation instruction, and sends the identified facial expression to the control unit 7;
the third processing unit 53 is connected with the host computer 4 and is used for receiving, from the host computer 4, the facial expression control information collected by the expression collecting unit 3; when that control information is voice information for controlling facial expressions, the third processing unit 53 matches the voice information against the sample expression database 6 so as to identify the facial expression that matches the voice information, and sends the identified facial expression to the control unit 7.
Further, the control unit 7 includes a first control unit 71, a second control unit 72 and a third control unit 73;
the first control unit 71 is connected with the processing unit 5 and is used for receiving, from the first processing unit 51, the facial expression that matches the facial expression image; upon receiving it, the first control unit 71 generates a control instruction for the facial expression action that matches the facial expression image and sends it to the action unit 8;
the second control unit 72 is connected with the processing unit 5 and is used for receiving, from the second processing unit 52, the facial expression that matches the operation instruction for controlling facial expressions; upon receiving it, the second control unit 72 generates a control instruction for the facial expression action that matches the operation instruction and sends it to the action unit 8;
the third control unit 73 is connected with the processing unit 5 and is used for receiving, from the third processing unit 53, the facial expression that matches the voice information for controlling facial expressions; upon receiving it, the third control unit 73 generates a control instruction for the facial expression action that matches the voice information and sends it to the action unit 8.
Preferably, in this embodiment of the utility model, the facial expression images include a facial expression image of left eyebrow up-and-down motion, a facial expression image of left eyebrow rotary motion, a facial expression image of right eyebrow up-and-down motion, a facial expression image of right eyebrow rotary motion, a facial expression image of left upper eyelid motion, a facial expression image of left lower eyelid motion, a facial expression image of right upper eyelid motion, a facial expression image of right lower eyelid motion, a facial expression image of eyeball horizontal motion and a facial expression image of mandible motion;
the operation instructions input by the user for controlling facial expressions include an operation instruction for left eyebrow up-and-down motion, an operation instruction for left eyebrow rotary motion, an operation instruction for right eyebrow up-and-down motion, an operation instruction for right eyebrow rotary motion, an operation instruction for left upper eyelid motion, an operation instruction for left lower eyelid motion, an operation instruction for right upper eyelid motion, an operation instruction for right lower eyelid motion, an operation instruction for eyeball horizontal motion and an operation instruction for mandible motion;
the voice information input by the user for controlling facial expressions includes voice information for left eyebrow up-and-down motion, voice information for left eyebrow rotary motion, voice information for right eyebrow up-and-down motion, voice information for right eyebrow rotary motion, voice information for left upper eyelid motion, voice information for left lower eyelid motion, voice information for right upper eyelid motion, voice information for right lower eyelid motion, voice information for eyeball horizontal motion and voice information for mandible motion.
Further, the facial-expression actions made by the motion unit 8 that match the facial-expression control information collected by the expression collecting unit 3 specifically include the facial-expression actions of left-eyebrow up-and-down motion, left-eyebrow rotary motion, right-eyebrow up-and-down motion, right-eyebrow rotary motion, left-upper-eyelid motion, left-lower-eyelid motion, right-upper-eyelid motion, right-lower-eyelid motion, horizontal eyeball motion and mandible motion.
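The sample images, operational instructions, voice commands and motion-unit actions above all enumerate the same ten facial-expression actions. As a minimal illustrative sketch (the identifier names are hypothetical, not part of this utility model), the shared action set can be expressed as a single enumeration so that every input channel resolves to the same action labels:

```python
from enum import Enum

class ExpressionAction(Enum):
    """The ten facial-expression actions shared by the sample image table,
    the user's operational instructions, and the voice commands."""
    LEFT_EYEBROW_UP_DOWN = 1
    LEFT_EYEBROW_ROTATE = 2
    RIGHT_EYEBROW_UP_DOWN = 3
    RIGHT_EYEBROW_ROTATE = 4
    LEFT_UPPER_EYELID = 5
    LEFT_LOWER_EYELID = 6
    RIGHT_UPPER_EYELID = 7
    RIGHT_LOWER_EYELID = 8
    EYEBALL_HORIZONTAL = 9
    MANDIBLE = 10

# Because the image, instruction, and voice channels all name the same actions,
# recognition results from any channel can share one matching routine.
ACTION_COUNT = len(ExpressionAction)
```
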
Further, the first expression collecting unit 31 is an image-capture device (camera), the second expression collecting unit 32 is a touch screen, and the third expression collecting unit 33 is a microphone.
It should be noted that the camera in the expression-robot interactive system provided by this utility model can be connected to the expression robot through a Universal Serial Bus (USB), or the camera can be connected to the terminal device wirelessly, for example via Bluetooth or infrared. The embodiments of this utility model do not specifically limit the deployment mode or connection mode between the expression robot and the camera, as long as the connection relationship substantially exists.
It should also be noted that in the expression robot provided by this utility model, the robot body frame 1 includes a shell system 11, together with two eyebrow rotating parts 12, two eyelid rotating parts 13, two eyeball rotating parts 14 and a mandible moving part 15 that are separately arranged on the shell system 11. Starting from the overall structure of the expression robot, and on the premise of not affecting the expression effect, this utility model redesigns the structure of the four main facial modules, namely the eyebrow rotating parts 12, the eyelid rotating parts 13, the eyeball rotating parts 14 and the mandible moving part 15, using eleven motor linkages to accurately control eleven degrees of freedom, corresponding as follows:
(1) left eyebrow: up-and-down degree of freedom (two motors synchronized);
(2) left eyebrow: rotary degree of freedom (two motors counter-rotating);
(3) right eyebrow: up-and-down degree of freedom (two motors synchronized);
(4) right eyebrow: rotary degree of freedom (two motors counter-rotating);
(5) left upper eyelid: up-and-down degree of freedom;
(6) left lower eyelid: up-and-down degree of freedom;
(7) right upper eyelid: up-and-down degree of freedom;
(8) right lower eyelid: up-and-down degree of freedom;
(9) both eyeballs: up-and-down degree of freedom;
(10) both eyeballs: side-to-side degree of freedom;
(11) mandible: up-and-down degree of freedom.
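The eyebrow pairs above realize two degrees of freedom with two motors each: driving both motors with the same sign produces up-and-down motion, while opposite signs produce rotation. A minimal sketch of this differential mapping (the motor naming and sign convention are hypothetical, not specified by this utility model):

```python
def eyebrow_commands(raise_amount: float, rotate_amount: float):
    """Return (inner_motor, outer_motor) commands for one eyebrow.

    raise_amount:  synchronized component -- both motors move together,
                   realizing the up-and-down degree of freedom.
    rotate_amount: counter-rotating component -- the motors move in
                   opposite directions, realizing the rotary degree of freedom.
    """
    inner = raise_amount + rotate_amount
    outer = raise_amount - rotate_amount
    return inner, outer

# The remaining seven degrees of freedom use one motor each, so the totals
# work out as: 7 single motors + 2 motor pairs (eyebrows) = 11 motors / 11 DOF.
SINGLE_MOTOR_DOF = [
    "left_upper_eyelid", "left_lower_eyelid",
    "right_upper_eyelid", "right_lower_eyelid",
    "eyeballs_vertical", "eyeballs_horizontal",
    "mandible",
]
```
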
In terms of power, motors of higher precision are selected as the drives; meanwhile, the drive system is separated from the robot body frame, power is transmitted through gear transmissions to driving steel wires, and each facial module is finally driven to move freely.
Beneficial effects of this utility model: in the expression robot provided by this utility model, the expression collecting unit collects facial-expression control information and sends it to the host computer; the host computer sends the received facial-expression control information to the processing unit; the processing unit receives the facial-expression control information sent by the host computer and matches it against the sample expression database so as to identify the facial expression that matches the control information, and sends the recognition result to the control unit; the control unit receives the recognition result sent by the processing unit, generates the control signal that directs the motion unit to make the matching facial-expression action, and sends it to the motion unit; the motion unit, according to the control signal, makes the facial-expression action that matches the facial-expression control information. The head structure of the expression robot is thereby simplified and improved: the number of joints is reduced, space utilization is improved, the number of degrees of freedom needed to realize expressions is simplified, the robot runs more smoothly, and installation and operation are simpler. With interactive modes of facial-expression recognition, speech recognition and operational instructions, the robot can identify the user's facial-expression actions quickly and accurately, making the interactive process more vivid and interesting.
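The control flow just described can be summarized as a short pipeline sketch. All names and the sample database entries below are hypothetical placeholders; real units would communicate over the hardware links described above:

```python
# Hypothetical sample expression database: collected control information
# keyed to the matching facial-expression action.
SAMPLE_EXPRESSION_DB = {
    "smile_frame": "mandible_motion",
    "raised_brows_frame": "left_eyebrow_up_down",
}

def processing_unit(control_info: str) -> str:
    """Match the collected control information against the sample database."""
    return SAMPLE_EXPRESSION_DB.get(control_info, "neutral")

def control_unit(recognition_result: str) -> dict:
    """Generate the control signal that directs the motion unit."""
    return {"action": recognition_result, "speed": 1.0}

def motion_unit(control_signal: dict) -> str:
    """Make the facial-expression action named in the control signal."""
    return f"performing {control_signal['action']}"

def interact(collected_info: str) -> str:
    """End-to-end flow: expression collecting unit -> host computer ->
    processing unit -> control unit -> motion unit."""
    return motion_unit(control_unit(processing_unit(collected_info)))
```
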
The above are preferred embodiments of this utility model. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can also be made without departing from the principles of this utility model, and these improvements and modifications shall also be regarded as falling within the protection scope of this utility model.