CN103679203A - Robot system and method for detecting human face and recognizing emotion - Google Patents

Robot system and method for detecting human face and recognizing emotion Download PDF

Info

Publication number
CN103679203A
CN103679203A (application CN201310694112.7A)
Authority
CN
China
Prior art keywords
expression
face
people
expressive features
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310694112.7A
Other languages
Chinese (zh)
Other versions
CN103679203B (en)
Inventor
蔡则苏 (Cai Zesu)
王丙祥 (Wang Bingxiang)
王玲 (Wang Ling)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JIANGSU JIUXIANG AUTOMOBILE APPLIANCE GROUP CO Ltd
Original Assignee
JIANGSU JIUXIANG AUTOMOBILE APPLIANCE GROUP CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JIANGSU JIUXIANG AUTOMOBILE APPLIANCE GROUP CO Ltd
Priority to CN201310694112.7A
Publication of CN103679203A
Application granted
Publication of CN103679203B
Legal status: Expired - Fee Related

Abstract

The invention discloses a robot system and method for detecting human faces and recognizing emotion. The system comprises a facial expression library acquisition module, an original expression library building module, a feature library reconstruction module, a live expression feature extraction module and an expression recognition module. The facial expression library acquisition module collects a large number of color facial-expression image frames through a video acquisition device and processes them to form a facial expression library. The original expression library building module removes redundant image information from the training images in the facial expression library, extracts expression features, and forms an original expression feature library. The feature library reconstruction module reconstructs the original expression feature library into a structured hash table using distance-based hashing. The live expression feature extraction module collects live color facial-expression image frames through the video acquisition device and extracts live expression features. The expression recognition module recognizes the facial expression by applying the k-nearest-neighbor classification algorithm to the live expression features within the reconstructed feature library.

Description

Robot face detection and emotion recognition system and method
Technical field
The present invention relates to the field of intelligent robot technology, and in particular to a face detection and emotion recognition system and method for an intelligent robot.
Background technology
Existing home-service robots have limited face recognition capability, and their emotion recognition capability in particular is very limited; they cannot comprehensively identify the emotional state of the elderly or children in a household. For example, Chinese patent application CN200720077448.9 describes an intelligent robot with face recognition that comprises a robot body with leg motors, arm motors, a neck motor and a loudspeaker; a camera mounted on the robot body for capturing face images; and a face recognition unit that identifies a captured face image by comparing it with pre-stored face images. Chinese patent application 201220365083.0 describes a security robot with face recognition, composed of a security robot body; a camera arranged on the surface of the body for capturing face images; a face recognition module housed inside the body that identifies faces by comparing the captured images with pre-stored face images; a data processing module for processing data and sending instructions to the other modules; and a remote control module through which the user directs the robot to perform security actions. The face recognition modules in these two applications identify a captured face image only by comparing it with pre-stored face images; their recognition capability is limited, and in particular they cannot recognize faces effectively when many face images are stored.
In addition, Chinese utility model patent CN201120506957.5 provides a robot for assisting the elderly and the disabled. It comprises a rotatable single-camera vision system for identifying daily necessities, visitors and obstacles, a pair of five-fingered humanoid manipulators for reproducing hand actions, and a triangular coupled-wheel drive structure for locomotion. Although this robot provides functions such as anti-theft monitoring, safety inspection, diagnosis and treatment monitoring, walking assistance, article carrying, home appliance control, sanitation and hygiene, home entertainment, timekeeping and wake-up calls, and children's education, it has no face detection or emotion recognition function and cannot effectively identify the state of the elderly or children in the household.
In summary, existing face detection and emotion recognition systems for domestic robot applications consist mainly of a movable camera and a fixed camera and merely implement face recognition; their degree of intelligence is limited, they cannot perceive other surrounding information, and they cannot comprehensively analyze the emotional state of the elderly and children so as to provide all-round companion care. It is therefore necessary to propose a technical means to solve the above problems.
Summary of the invention
To overcome the deficiencies of the prior art described above, the main purpose of the present invention is to provide a robot face detection and emotion recognition system and method that turns the robot into a household monitoring robot capable of face recognition and emotion recognition, realizes monitoring of the emotional state of the elderly and companion care for children, and improves the household monitoring and care capabilities of home robots.
To achieve the above and other objects, the present invention proposes a robot face detection and emotion recognition system that comprises at least:
a facial expression library acquisition module, which collects a large number of color facial-expression image frames with a video acquisition device, preprocesses them, performs face detection and localization and eye detection and localization with a face detector and an eye detector, rotates the face accordingly, and finally locates the expression region precisely using the geometric distribution characteristics of the face, generating and storing a training image set for expression feature extraction as the facial expression library;
an original expression library building module, which takes the training images of the facial expression library, removes redundant image information, extracts expression features, and saves the expression features to a file to form the original expression feature library;
a feature library reconstruction module, which reconstructs the original expression feature library into a structured hash table using distance-based hashing;
a live expression feature extraction module, which collects live color facial-expression image frames from the video acquisition device, preprocesses them, performs face detection and localization and eye detection and localization with the face detector and the eye detector, rotates the face, locates the expression region precisely using the geometric distribution characteristics of the face, generates live facial expression images, and extracts live expression features from them; and
an expression recognition module, which recognizes the facial expression by applying the k-nearest-neighbor classification algorithm to the live expression features extracted by the live expression feature extraction module within the reconstructed expression feature library.
Further, the preprocessing consists of converting the color image frames into grayscale images and normalizing the brightness of the grayscale images with histogram equalization.
Further, the original expression library building module removes redundant image information by principal component analysis (PCA) dimensionality reduction before extracting the expression features.
Further, the video acquisition device is a camera arranged on the head of the robot, whose position is controlled by the head movement device of the robot.
Further, the camera is arranged in an eyeball of the robot's head.
To achieve the above objects, the present invention also provides a robot face detection and emotion recognition method comprising the following steps:
Step 1: collect a large number of color facial-expression image frames from a video acquisition device, preprocess them, perform face detection and localization and eye detection and localization with a face detector and an eye detector, rotate the face accordingly, and finally locate the expression region precisely using the geometric distribution characteristics of the face, generating and storing a training image set for expression feature extraction as the facial expression library;
Step 2: take the training images of the facial expression library, remove redundant image information, extract expression features, and save the expression features to a file to form the original expression feature library;
Step 3: reconstruct the original expression feature library into a structured hash table using distance-based hashing;
Step 4: collect live color facial-expression image frames from the video acquisition device, preprocess them, perform face detection, eye detection and face rotation, locate the expression region precisely using the geometric distribution characteristics of the face, and generate the live facial expression image;
Step 5: remove redundant image information from the live facial expression image and extract the live expression features;
Step 6: recognize the facial expression by applying the k-nearest-neighbor classification algorithm to the live expression features within the reconstructed expression feature library.
Further, the preprocessing consists of converting the color image frames into grayscale images and normalizing the brightness of the grayscale images with histogram equalization.
Further, in step 2 and step 5, redundant image information is removed by principal component analysis (PCA) dimensionality reduction before the expression features are extracted.
Further, feature extraction by principal component analysis converts the training image set into a feature set; each principal component corresponds to an eigenvalue, a larger eigenvalue indicating a more important principal component, and the expression features are constructed from a number of principal components selected in order of decreasing eigenvalue.
Further, in step 3, reconstructing the original expression feature library into a structured hash table by distance-based hashing consists of creating a hash table of a given length, inserting each expression feature into the hash table, and using the hash table as the search database.
Compared with the prior art, the robot face detection and emotion recognition system and method of the present invention, through the steps of training expression extraction, training expression feature extraction, expression feature reconstruction, live expression extraction, live expression feature extraction and expression recognition, turn the robot into a household monitoring robot capable of face recognition and emotion recognition, realize monitoring of the emotional state of the elderly and companion care for children, and improve the household monitoring and care capabilities of home robots.
Brief Description of the Drawings
Fig. 1 is a system architecture diagram of the robot to which the present invention is applied;
Fig. 2 is a schematic diagram of the structural arrangement of the robot in a preferred embodiment of the present invention;
Fig. 3 is a system architecture diagram of the robot face detection and emotion recognition system of the present invention;
Fig. 4 is a schematic diagram of the principal components of the training expression library in a preferred embodiment of the present invention;
Fig. 5 is a schematic diagram of the average image of the training expression library in a preferred embodiment of the present invention;
Fig. 6 is a flow chart of the steps of the robot face detection and emotion recognition method of the present invention;
Fig. 7 is a schematic diagram of the experimental system interface adopted by the robot face detection and emotion recognition method of the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention are described below through specific examples with reference to the accompanying drawings, from which those skilled in the art can easily understand the other advantages and effects of the present invention. The present invention may also be implemented or applied through other different specific examples, and the details in this specification may be modified and changed in various ways from different viewpoints and for different applications without departing from the spirit of the present invention.
Before introducing the face detection and emotion recognition system of the present invention, the structure of the robot to which the invention is applied is first described. Fig. 1 is a system architecture diagram of this robot. As shown in Fig. 1, the robot comprises a master control system 210, a locomotion device 220, a head movement device 230, a human-machine interaction system 240, a video acquisition module 250, a wireless transceiver module 260, a voice acquisition module 270, a mileage measurement device 280 and a remote control receiver 290.
The master control system 210 controls and coordinates the other parts. It consists of hardware and software: the hardware is composed of a DSP, an MCU and related components, while the software is divided into modules for path planning, obstacle detection, face detection, emotion detection, environment detection, abnormality detection, mileage information and information fusion. The face detection and emotion recognition module is connected to video acquisition device 1 to gather face information and recognize expressions such as happiness, anger, sadness and joy; the environment perception and abnormality detection module is connected to video acquisition device 2 and is mainly used to perceive the external environment and detect abnormal external conditions; the mileage information module is connected to the mileage measurement device 280 to obtain mileage information; the obstacle detection module is connected to ultrasonic sensors to measure the distance from external obstacles to the robot; and the information fusion module merges the information collected by the external video acquisition modules, the voice acquisition module, the ultrasonic sensors, the battery detection and charging module, and the mileage information, in order to judge whether the robot should charge or continue serving, whether it needs to avoid an obstacle, and whether it needs to respond to a call command.
The locomotion device 220 comprises a chassis driver, a motor drive module, DC motors M1/M2, driving wheels 1/2 with their transmission, a universal wheel, a battery pack, ultrasonic sensors, and a battery detection and charging module. It mainly receives commands from the master control system 210 to drive the robot's walking, performs ultrasonic ranging while walking and feeds the results back to the master control system 210 for subsequent control, and during service monitors the battery level in time and reports it to the master control system 210 so that the need for charging can be judged.
The head movement device 230 comprises a head controller, steering gears 1/2 and a transmission mechanism. It receives commands from the master control system 210 to pitch the head and rotate it left and right, and the head motion drives the monocular cameras of video acquisition devices 1/2 mounted on the head to obtain image information. In the present invention, steering gear 1 controls the pitch rotation of the neck and steering gear 2 controls the left-right rotation of the head, thereby controlling the orientation of the eyeballs (video acquisition devices 1 and 2).
The human-machine interaction system 240 is connected to the master control system 210 and comprises conventional input and output devices such as a display screen, a keyboard and a speech system, to display the robot's state and accept manually entered information and commands. The video acquisition module 250 comprises video acquisition device 1 and video acquisition device 2: device 1 comprises an ordinary monocular camera and its acquisition circuit and obtains the information needed for face detection and emotion recognition, while device 2 comprises a wide-angle monocular camera and its acquisition circuit and obtains the information needed for environment perception and abnormality detection. The video acquisition module is installed on the robot's head, and its position is controlled by the head movement device 230. The wireless transceiver module 260, under the control of the master control system 210, sends information to and receives commands from the user (owner) through a communication network. The voice acquisition module 270 gathers external voice information for the master control system 210 to judge whether it contains speech or commands. The mileage measurement device 280 measures the walking mileage and uploads the mileage information to the master control system 210 for processing.
Fig. 2 is a schematic diagram of the structural arrangement of the robot in a preferred embodiment of the present invention. In this embodiment, the master control system controls the left and right arms and the head rotation through a communication module and a stepper-motor control system. The face detection and emotion recognition module of the master control system obtains the face information gathered by camera 1 through a USB interface and recognizes expressions such as happiness, anger, sadness and joy; the environment detection module obtains the information needed for environment perception and abnormality detection through a USB interface; and the master control system connects to several microphones (voice acquisition devices) through a splitter to gather external voice information and judge whether it contains speech or commands. The master control system is also connected to a chassis controller, through which a lower-level control system drives the left and right wheel motors, performs ultrasonic obstacle avoidance and path planning with ultrasonic detectors, and controls the battery detection and charging module for battery monitoring and automatic charging.
Fig. 3 is a system architecture diagram of the robot face detection and emotion recognition system of the present invention. As shown in Fig. 3, the system comprises at least a facial expression library acquisition module 30, an original expression feature library building module 31, a feature library reconstruction module 32, a live expression feature extraction module 33 and an expression recognition module 34.
The facial expression library acquisition module 30 gathers a large number of facial-expression image frames (color image frames) from video acquisition device 1 (the camera), converts them into grayscale images and normalizes their brightness with histogram equalization (the preprocessing), then uses a face detector and an eye detector to perform face detection and localization and eye detection and localization, rotates the face accordingly, and finally locates the expression region precisely using the geometric distribution characteristics of the face, generating the training image set used for expression feature extraction, i.e. the facial expression library.
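The preprocessing described here, grayscale conversion followed by histogram equalization, can be sketched as follows. This is an illustrative NumPy implementation rather than the patent's own code; the frame size and the BT.601 luminance weights are assumptions.

```python
import numpy as np

def to_gray(rgb):
    """Luminance conversion using the common ITU-R BT.601 weights."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

def equalize_hist(gray):
    """Brightness normalization: map gray levels through the normalized
    cumulative histogram, spreading them over the full 0..255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                       # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[gray]

frame = np.random.randint(0, 256, (48, 48, 3), dtype=np.uint8)  # stand-in frame
gray = to_gray(frame)
norm = equalize_hist(gray)
```

In the pipeline, `norm` would then be handed to the face and eye detectors.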
In the preferred embodiment of the present invention, the geometric distribution characteristics of the face mainly refer to the "three courts and five eyes" rule, a general standard for the proportions of the length and width of the human face summarized through long-term observation. Vertically, the face consists of three regions of equal height: from the hairline to the eyebrows, from the eyebrows to the base of the nose, and from the base of the nose to the jaw; these are the "three courts". Horizontally, the face consists of five regions of equal width: from the left hairline to the left outer canthus, from the left outer canthus to the left inner canthus, from the left inner canthus to the right inner canthus, from the right inner canthus to the right outer canthus, and from the right outer canthus to the right hairline; these are the "five eyes". According to the "three courts and five eyes" rule, the rectangular area from the eyebrows down to the jaw and from the left outer canthus to the right outer canthus can be located precisely; this is the final facial expression region. Accurately locating the expression region reduces the useless information carried by the image, which not only improves the accuracy of expression recognition but also speeds it up, generating a more effective training image set for expression feature extraction.
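As a toy illustration of how the "three courts and five eyes" proportions can turn two detected outer eye corners into an expression rectangle, consider the sketch below. The ratio constants 0.25 and 1.25 are hypothetical values chosen only to illustrate the idea, not the patent's calibrated proportions.

```python
def expression_region(left_outer, right_outer):
    """left_outer, right_outer: (x, y) coordinates of the two outer eye corners.
    Returns (x0, y0, x1, y1) of the estimated expression rectangle."""
    (lx, ly), (rx, ry) = left_outer, right_outer
    width = rx - lx                  # spans three of the five "eye" widths
    cy = (ly + ry) / 2.0
    y0 = cy - 0.25 * width           # slightly above the eyebrows (assumed ratio)
    y1 = cy + 1.25 * width           # down to the jaw line (assumed ratio)
    return (lx, y0, rx, y1)

box = expression_region((40, 60), (100, 60))
```

A real system would derive the vertical extent from the detected eyebrow and jaw lines; the fixed ratios here merely encode the equal-thirds assumption of the rule.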
The original expression library building module 31 takes the training images of the facial expression library, removes redundant image information by principal component analysis (PCA) dimensionality reduction, extracts the expression features, and saves the expression features to a file to form the original expression feature library.
In the preferred embodiment of the present invention, the original expression feature library is established as follows:
(1) Extract the facial expression features with PCA (principal component analysis).
For a facial image, the regions that best embody its expression are the mouth, eyes and cheeks. If the most significant expression features can be extracted, both recognition accuracy and efficiency are greatly improved, and a simple and effective method for extracting facial expression features is principal component analysis (PCA). Let each facial image of m x n pixels be represented as a vector x in the d-dimensional space (d = m x n); a training set of N facial images can then be represented as a d x N matrix X = [x_1, ..., x_N]. The goal of PCA is to re-express the raw data X through a basis transformation as Y = W^T X, where W is a d x k matrix whose columns are the leading eigenvectors of the covariance matrix of X, and Y is the resulting k x N feature matrix. Feature extraction based on PCA thus converts the training set X into the feature set Y. Each principal component corresponds to an eigenvalue, and a larger eigenvalue indicates a more important principal component; by selecting a number of principal components in order of decreasing eigenvalue to construct the expression features, the dimensionality of the features can be reduced as far as possible while maintaining a high recognition rate.
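A minimal sketch of this PCA feature extraction, under the assumption that faces are flattened into the columns of X and features are obtained by projecting onto the leading principal directions; the toy random data stands in for the expression library.

```python
import numpy as np

def pca_fit(X, k):
    """X: d x N matrix with one flattened training face per column.
    Returns the column mean and the top-k principal directions W."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Left singular vectors of the centered data are the eigenvectors of
    # the covariance matrix, ordered by decreasing eigenvalue.
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return mean, U[:, :k]

def pca_project(x, mean, W):
    """Project a single flattened face onto the k principal components."""
    return W.T @ (x.reshape(-1, 1) - mean)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 20))   # 20 toy "faces" of 64 pixels each
mean, W = pca_fit(X, k=5)
y = pca_project(X[:, 0], mean, W)   # 5-dimensional expression feature
```

Each column of W plays the role of an "eigen-expression"; keeping only the top k columns is the dimensionality reduction described above.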
Fig. 4 is a schematic diagram of the principal components of the training expression library in the preferred embodiment of the present invention, and Fig. 5 is a schematic diagram of its average image. As shown in Fig. 4 and Fig. 5, the present invention was tested on the standard JAFFE expression library and on an expression library collected for particular individuals (the personalized library). The JAFFE library contains 7 expressions (6 basic expressions plus a calm expression) in a total of 213 expression pictures, of which 140 were used for training and principal component extraction. The personalized library contains 900 pictures, of which 450 were used for training and principal component extraction. The principal components and average images obtained by principal component analysis on the training expression libraries are shown in Fig. 4 and Fig. 5, where (a) is the JAFFE expression library and (b) is the personalized library. The experiments show that the principal components (eigenvectors) of the expression libraries are quite similar to the eigenfaces used in face detection.
The feature library reconstruction module 32 reconstructs the original expression feature library into a structured hash table using distance-based hashing (DBH) in order to improve expression recognition efficiency.
After the training image set has been reduced by PCA and the original expression feature library established, the library is a matrix composed of many expression features. During recognition, the expression to be identified must be compared for similarity with every feature in the library, and when the library is large the recognition efficiency is very low. To avoid comparing similarity over the whole expression feature library, the present invention uses distance-based hashing (DBH) to reconstruct the feature library into a structured hash table in which the features are clustered by similarity; retrieval then only needs to search the corresponding bucket, which effectively prunes the original feature library. The basic idea of the DBH-based reconstruction of the facial expression feature library is to create a hash table of a given length, insert each expression feature into the hash table, and use the hash table as the search database.
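A hedged sketch of distance-based hashing: each hash bit thresholds a "line projection" of a feature between two pivot features, and the resulting bit string indexes a bucket of a fixed-length table. The pivots, threshold and table length below are illustrative assumptions, not the patent's parameters.

```python
import math

def line_projection(x, p1, p2):
    """Project feature x onto the "line" between pivot features p1 and p2,
    using only pairwise distances (the DBH projection formula)."""
    d = math.dist
    return (d(x, p1) ** 2 + d(p1, p2) ** 2 - d(x, p2) ** 2) / (2 * d(p1, p2))

def dbh_bucket(x, pivot_pairs, thresholds, table_len):
    """One bit per pivot pair; the bit string (mod table length) picks a bucket."""
    bits = 0
    for (p1, p2), t in zip(pivot_pairs, thresholds):
        bits = (bits << 1) | int(line_projection(x, p1, p2) >= t)
    return bits % table_len

def build_table(features, pivot_pairs, thresholds, table_len):
    table = [[] for _ in range(table_len)]
    for f in features:
        table[dbh_bucket(f, pivot_pairs, thresholds, table_len)].append(f)
    return table

feats = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]   # toy 2-D "expression features"
pairs = [((0.0, 0.0), (5.0, 5.0))]             # one pivot pair, one hash bit
table = build_table(feats, pairs, thresholds=[3.0], table_len=8)
# the two nearby features share bucket 0; the distant one lands in bucket 1
```

At query time the same bucket function is applied to the live feature, so only the features in that bucket need to be compared.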
The live expression feature extraction module 33 converts the live color facial-expression image frames gathered from video acquisition device 1 into grayscale images and normalizes their brightness with histogram equalization, then uses the face detector and the eye detector to perform face detection and localization and eye detection and localization, rotates the face, and finally locates the expression region precisely using the geometric distribution characteristics of the face to generate the live facial expression image. It then removes redundant image information by principal component analysis (PCA) dimensionality reduction, extracts the live expression features, and delivers them to the expression recognition module 34.
The expression recognition module 34 recognizes the facial expression by applying the k-nearest-neighbor classification algorithm (KNN) to the live expression features extracted by the live expression feature extraction module 33 within the reconstructed expression feature library, and the result is used to guide the robot in anticipating the behavior of the elderly and children.
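The recognition step can be sketched as a compact k-nearest-neighbor classifier: the live feature is compared only against candidate features (for example, the contents of one hash bucket), and the majority label among the k closest wins. The feature values and expression labels below are illustrative.

```python
import math
from collections import Counter

def knn_classify(query, candidates, k=3):
    """candidates: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest candidates."""
    ranked = sorted(candidates, key=lambda c: math.dist(query, c[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "neutral"), ((0.2, 0.1), "neutral"),
         ((5.0, 5.0), "happy"), ((5.1, 4.9), "happy"), ((4.8, 5.2), "happy")]
label = knn_classify((4.9, 5.0), train, k=3)   # nearest three are all "happy"
```

In the full system, `train` would hold PCA features from the reconstructed library rather than hand-written pairs.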
Fig. 6 is a flow chart of the steps of the robot face detection and emotion recognition method of the present invention. As shown in Fig. 6, the method comprises the following steps:
Step 601, training expression extraction: first gather a large number of color facial-expression image frames from the video acquisition device (camera), convert them into grayscale images and normalize their brightness with histogram equalization, then perform face detection and eye detection with the face detector and the eye detector, rotate the face, and at the same time locate the expression region precisely using the geometric distribution characteristics of the face, generating the training image set for expression feature extraction and establishing the facial expression library.
Step 602, training expression feature extraction: take the training images of the facial expression library, remove redundant image information by principal component analysis (PCA) dimensionality reduction, extract the expression features, and save them to a file to form the original expression feature library.
In the preferred embodiment of the present invention, PCA (principal component analysis) is used to extract the facial expression features.
For a facial image, the regions that best embody its expression are the mouth, eyes and cheeks. If the most significant expression features can be extracted, both recognition accuracy and efficiency are greatly improved, and a simple and effective method for extracting facial expression features is principal component analysis (PCA). Let each facial image of m x n pixels be represented as a vector x in the d-dimensional space (d = m x n); a training set of N facial images can then be represented as a d x N matrix X = [x_1, ..., x_N]. The goal of PCA is to re-express the raw data X through a basis transformation as Y = W^T X, where W is a d x k matrix whose columns are the leading eigenvectors of the covariance matrix of X, and Y is the resulting k x N feature matrix. Feature extraction based on PCA thus converts the training set X into the feature set Y. Each principal component corresponds to an eigenvalue, and a larger eigenvalue indicates a more important principal component; by selecting a number of principal components in order of decreasing eigenvalue to construct the expression features, the dimensionality of the features can be reduced as far as possible while maintaining a high recognition rate.
Step 603, reconstruct expressive features.Use is based on apart from Hash method (DBH), original expression feature database being reconstructed into structurized Hash table, as improving Expression Recognition efficiency.
After PCA dimensionality reduction of the above training image set establishes the original expression feature library, the library is a matrix composed of many expression features. During recognition, the expression under test must be compared for similarity against every feature in the library, so recognition is very inefficient when the library is large. To avoid similarity comparisons over the entire expression feature library, the present invention uses distance-based hashing (Distance-Based Hashing, DBH) to reconstruct the feature library into a structured hash table, so that similar features in the library are clustered together. Retrieval then searches only the corresponding bucket, achieving the goal of pruning the original feature library. The basic idea of DBH-based reconstruction of the facial expression feature library is to create a hash table of a given length, insert each expression feature into the hash table, and use the hash table as the search database.
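A minimal sketch of the distance-based hashing idea follows: each hash bit thresholds a distance-based "line projection" of a feature onto a pair of pivot features, the concatenated bits index a bucket, and a query searches only its own bucket. The pivot selection and median thresholds here are simplifying assumptions; the patent does not fix them.

```python
import numpy as np

def line_projection(x, p1, p2):
    """DBH real-valued projection of x onto the 'line' through pivots p1, p2,
    computed from pairwise Euclidean distances only."""
    d_x1 = np.linalg.norm(x - p1)
    d_x2 = np.linalg.norm(x - p2)
    d_12 = np.linalg.norm(p1 - p2)
    return (d_x1**2 + d_12**2 - d_x2**2) / (2 * d_12)

def build_dbh_table(features, pivot_pairs, thresholds):
    """Hash every feature vector into a bucket keyed by its bit string."""
    table = {}
    for i, f in enumerate(features):
        bits = tuple(int(line_projection(f, p1, p2) >= t)
                     for (p1, p2), t in zip(pivot_pairs, thresholds))
        table.setdefault(bits, []).append(i)
    return table

def query_bucket(table, q, pivot_pairs, thresholds):
    """Return the indices stored in the query's bucket: the candidate set
    over which the k-NN search is then run, instead of the whole library."""
    bits = tuple(int(line_projection(q, p1, p2) >= t)
                 for (p1, p2), t in zip(pivot_pairs, thresholds))
    return table.get(bits, [])
```

Because every feature lands in exactly one bucket, the table partitions (cuts) the original feature library, which is the pruning effect described above.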
Step 604, on-site expression extraction. Color facial-expression image frames are captured live from the video acquisition device (camera), converted to grayscale, and brightness-normalized by histogram equalization; then face detection and eye detection are performed, the face is rotated, and facial geometric features are used to accurately locate the expression region, generating the on-site facial expression image.
Step 605, on-site expression feature extraction. Using the on-site facial expression image, PCA dimensionality reduction removes redundant image information, the on-site expression features are extracted, and the features are delivered to the expression recognition module.
Step 606, facial expression recognition. The expression features extracted on site are matched against the reconstructed expression feature library using the k-nearest-neighbor classification algorithm (K-Nearest Neighbor algorithm, KNN) to recognize the facial expression, which in turn guides the robot in predicting the behavior of the elderly and children.
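The k-nearest-neighbor vote of step 606 can be sketched as follows. The Euclidean metric and the simple majority vote are standard choices assumed for illustration; in the system above, the candidate features would come from the DBH bucket rather than the whole library.

```python
import numpy as np
from collections import Counter

def knn_classify(query, features, labels, k=3):
    """Majority vote among the k training features closest to the query.

    features: sequence of feature vectors (e.g. PCA coefficients),
    labels:   expression label of each feature (e.g. 'happy', 'sad').
    """
    dists = [np.linalg.norm(query - f) for f in features]
    nearest = np.argsort(dists)[:k]       # indices of the k closest features
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

With the hash table in place, `features` and `labels` would be restricted to the entries returned for the query's bucket, so the vote runs over a small candidate set.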
The present invention is further illustrated below through facial expression recognition experiments with a home service robot.
The interface of the experimental system of the present invention, shown in Figure 7, consists mainly of seven parts: (1) the video frame, which displays the video images captured by the camera, the expression library acquisition results, and the expression recognition results; (2) the expression library acquisition frame, which controls acquisition of the expression library; (3) the expression recognition frame, which controls expression recognition; (4) the DBH training frame, which trains the DBH parameters; (5) the expression training frame, which controls training of the expression feature library; (6) the message box, which displays partial operation results and error messages; (7) the sample count frame, which sets the number of samples acquired for each expression.
After a series of processing steps on the image frames captured by the camera, the system can recognize six basic expressions: happiness, sadness, surprise, disgust, anger, and fear.
In summary, the robot face detection and emotion recognition system and method of the present invention, through the steps of training expression extraction, training expression feature extraction, expression feature reconstruction, on-site expression extraction, on-site expression feature extraction, and expression recognition, turn the robot into a home monitoring robot capable of face recognition and emotion recognition, achieve the goal of having the robot monitor the emotional state of the elderly and keep children company, and enhance the home-monitoring and companionship capabilities of household robots.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments within the spirit and scope of the present invention. Accordingly, the scope of the present invention shall be as set forth in the appended claims.

Claims (10)

  1. A robot face detection and emotion recognition system, comprising at least:
    a facial expression library acquisition module, which captures a large number of color facial-expression image frames with a video acquisition device, preprocesses them, uses a face detector and an eye detector to perform face detection and localization and eye detection and localization while rotating the face, and finally uses facial geometric features to accurately locate the expression region, generating a facial expression library that stores the training image set for expression feature extraction;
    an original expression library construction module, which uses the training images of the facial expression library, extracts expression features after removing redundant image information from the training images, and saves the expression features to a file to form the original expression feature library;
    a feature library reconstruction module, which uses distance-based hashing to reconstruct the original expression feature library into a structured hash table;
    an on-site expression feature extraction module, which captures color facial-expression image frames live from the video acquisition device, preprocesses them, uses the face detector and eye detector to perform face detection and localization and eye detection and localization, rotates the face, uses facial geometric features to accurately locate the expression region, generates the on-site facial expression image, and extracts on-site expression features from that image;
    an expression recognition module, which uses the k-nearest-neighbor classification algorithm to recognize the facial expression by matching the on-site expression features extracted by the on-site expression feature extraction module against the reconstructed expression feature library.
  2. The robot face detection and emotion recognition system of claim 1, wherein the preprocessing converts the color image frames into grayscale images and applies histogram equalization to the grayscale images for brightness normalization.
  3. The robot face detection and emotion recognition system of claim 1, wherein the original expression library construction module uses principal component analysis (PCA) dimensionality reduction to remove redundant image information and then extracts expression features.
  4. The robot face detection and emotion recognition system of claim 1, wherein the video acquisition device is a camera mounted on the head of the robot, and changes in its position are controlled by the head movement device of the robot.
  5. The robot face detection and emotion recognition system of claim 4, wherein the camera is mounted in an eyeball of the robot's head.
  6. A robot face detection and emotion recognition method, comprising the steps of:
    step 1, capturing a large number of color facial-expression image frames from a video acquisition device, preprocessing them, using a face detector and an eye detector to perform face detection and localization and eye detection and localization while rotating the face, and finally using facial geometric features to accurately locate the expression region, thereby generating a facial expression library that stores the training image set for expression feature extraction;
    step 2, using the training images of the facial expression library, extracting expression features after removing redundant image information, and saving the expression features to a file to form the original expression feature library;
    step 3, using distance-based hashing to reconstruct the original expression feature library into a structured hash table;
    step 4, capturing color facial-expression image frames live from the video acquisition device, preprocessing them, performing face detection and eye detection, rotating the face, and using facial geometric features to accurately locate the expression region, thereby generating the on-site facial expression image;
    step 5, using the on-site facial expression image, removing redundant image information, and extracting on-site expression features;
    step 6, recognizing the facial expression by matching the expression features extracted on site against the reconstructed expression feature library with the k-nearest-neighbor classification algorithm.
  7. The robot face detection and emotion recognition method of claim 6, wherein the preprocessing converts the color image frames into grayscale images and applies histogram equalization to the grayscale images for brightness normalization.
  8. The robot face detection and emotion recognition method of claim 6, wherein in step 2 and step 5, principal component analysis (PCA) dimensionality reduction removes redundant image information before expression feature extraction.
  9. The robot face detection and emotion recognition method of claim 8, wherein feature extraction with principal component analysis converts the training image set into a feature set, each principal component has a corresponding eigenvalue, a larger eigenvalue indicates a more important principal component, and the expression features are constructed from a number of principal components selected in order of decreasing eigenvalue.
  10. The robot face detection and emotion recognition method of claim 6, wherein in step 3 the step of reconstructing the original expression feature library into a structured hash table by distance-based hashing comprises creating a hash table of a given length, inserting each expression feature into the hash table, and using the hash table as the search database.
CN201310694112.7A 2013-12-18 2013-12-18 Robot system and method for detecting human face and recognizing emotion Expired - Fee Related CN103679203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310694112.7A CN103679203B (en) 2013-12-18 2013-12-18 Robot system and method for detecting human face and recognizing emotion

Publications (2)

Publication Number Publication Date
CN103679203A true CN103679203A (en) 2014-03-26
CN103679203B CN103679203B (en) 2015-06-17

Family

ID=50316691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310694112.7A Expired - Fee Related CN103679203B (en) 2013-12-18 2013-12-18 Robot system and method for detecting human face and recognizing emotion

Country Status (1)

Country Link
CN (1) CN103679203B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT201900020548A1 (en) * 2019-11-07 2021-05-07 Identivisuals S R L SYSTEM AND METHOD OF DETECTION OF PSYCHOPHYSICAL WELL-BEING IN A CLOSED ENVIRONMENT

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101786272A (en) * 2010-01-05 2010-07-28 深圳先进技术研究院 Multisensory robot used for family intelligent monitoring service
CN102566474A (en) * 2012-03-12 2012-07-11 上海大学 Interaction system and method for robot with humanoid facial expressions, and face detection and tracking method
CN103186774A (en) * 2013-03-21 2013-07-03 北京工业大学 Semi-supervised learning-based multi-gesture facial expression recognition method
CN103268156A (en) * 2013-05-24 2013-08-28 徐州医学院 Wrist-strap-type gesture recognition device based on human-computer interaction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
蔡则苏等: "基于PCA特征提取和距离哈希K近邻分类的人脸表情识别", 《智能计算机与应用》, vol. 2, no. 1, 29 February 2012 (2012-02-29), pages 1 - 3 *
赵思成: "人脸表情识别及其在视频分类与推荐中的应用", 《中国优秀硕士学位论文全文数据库》, 6 January 2012 (2012-01-06), pages 13 - 23 *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590084A (en) * 2014-11-03 2016-05-18 贵州亿丰升华科技机器人有限公司 Robot human face detection tracking emotion detection system
CN104408693A (en) * 2014-11-27 2015-03-11 苏州大学 Color image reconstruction and identification method and system
CN104486331A (en) * 2014-12-11 2015-04-01 上海元趣信息技术有限公司 Multimedia file processing method, client terminals and interaction system
CN104835507A (en) * 2015-03-30 2015-08-12 渤海大学 Serial-parallel combined multi-mode emotion information fusion and identification method
CN104835507B (en) * 2015-03-30 2018-01-16 渤海大学 A kind of fusion of multi-mode emotion information and recognition methods gone here and there and combined
CN104700094A (en) * 2015-03-31 2015-06-10 江苏久祥汽车电器集团有限公司 Face recognition method and system for intelligent robot
CN107103269A (en) * 2016-02-23 2017-08-29 芋头科技(杭州)有限公司 One kind expression feedback method and intelligent robot
US11819996B2 (en) 2016-02-23 2023-11-21 Yutou Technology (Hangzhou) Co., Ltd. Expression feedback method and smart robot
WO2017143951A1 (en) * 2016-02-23 2017-08-31 芋头科技(杭州)有限公司 Expression feedback method and smart robot
CN105843118A (en) * 2016-03-25 2016-08-10 北京光年无限科技有限公司 Robot interacting method and robot system
CN105843118B (en) * 2016-03-25 2018-07-27 北京光年无限科技有限公司 A kind of robot interactive method and robot system
CN105938543A (en) * 2016-03-30 2016-09-14 乐视控股(北京)有限公司 Addiction-prevention-based terminal operation control method, device, and system
CN105843068A (en) * 2016-06-02 2016-08-10 安徽声讯信息技术有限公司 Emotion robot-based smart home environment collaborative control system
CN106407882A (en) * 2016-07-26 2017-02-15 河源市勇艺达科技股份有限公司 Method and apparatus for realizing head rotation of robot by face detection
CN106346475A (en) * 2016-11-01 2017-01-25 上海木爷机器人技术有限公司 Robot and robot control method
CN106778497A (en) * 2016-11-12 2017-05-31 上海任道信息科技有限公司 A kind of intelligence endowment nurse method and system based on comprehensive detection
CN106778861A (en) * 2016-12-12 2017-05-31 齐鲁工业大学 A kind of screening technique of key feature
CN108573695A (en) * 2017-03-08 2018-09-25 松下知识产权经营株式会社 Device, robot, method and program
CN107085717A (en) * 2017-05-24 2017-08-22 努比亚技术有限公司 A kind of family's monitoring method, service end and computer-readable recording medium
CN108656130A (en) * 2018-05-31 2018-10-16 芜湖星途机器人科技有限公司 Automatically robot people is looked for
CN108875660A (en) * 2018-06-26 2018-11-23 肖哲睿 A kind of interactive robot based on cloud computing
CN109145837A (en) * 2018-08-28 2019-01-04 厦门理工学院 Face emotion identification method, device, terminal device and storage medium
WO2020057570A1 (en) * 2018-09-18 2020-03-26 AI Gaspar Limited System and process for identification and illumination of anatomical sites of a person and articles at such sites
CN109871870B (en) * 2019-01-15 2021-05-25 中国科学院信息工程研究所 Nearest neighbor-based time sensitivity anomaly detection method in large data flow
CN109871870A (en) * 2019-01-15 2019-06-11 中国科学院信息工程研究所 A kind of time sensitivity method for detecting abnormality based on arest neighbors in high amount of traffic
CN110135357A (en) * 2019-05-17 2019-08-16 西南大学 A kind of happiness real-time detection method based on long-range remote sensing
CN110135357B (en) * 2019-05-17 2021-09-21 西南大学 Happiness real-time detection method based on remote sensing
CN110472610A (en) * 2019-08-22 2019-11-19 王旭敏 A kind of face identification device and its method from depth optimization
CN110555401A (en) * 2019-08-26 2019-12-10 浙江大学 self-adaptive emotion expression system and method based on expression recognition
CN113059563A (en) * 2021-03-25 2021-07-02 湖南翰坤实业有限公司 Intelligent accompanying robot
CN113059574A (en) * 2021-03-25 2021-07-02 湖南翰坤实业有限公司 Intelligent accompanying method of accompanying robot and intelligent accompanying robot

Also Published As

Publication number Publication date
CN103679203B (en) 2015-06-17

Similar Documents

Publication Publication Date Title
CN103679203B (en) Robot system and method for detecting human face and recognizing emotion
JP6929366B2 (en) Driver monitoring and response system
Hu et al. Bio-inspired embedded vision system for autonomous micro-robots: The LGMD case
CN110605724B (en) Intelligence endowment robot that accompanies
EP3709134A1 (en) Tool and method for annotating a human pose in 3d point cloud data
CN112842690B (en) Machine vision with dimension data reduction
CN105825268B (en) The data processing method and system of object manipulator action learning
CN103116279B (en) Vague discrete event shared control method of brain-controlled robotic system
CN108127669A (en) A kind of robot teaching system and implementation based on action fusion
CN107428004A (en) The automatic collection of object data and mark
CN105912980A (en) Unmanned plane and unmanned plane system
CN101947152A (en) Electroencephalogram-voice control system and working method of humanoid artificial limb
WO2021016394A1 (en) Visual teach and repeat mobile manipulation system
Xiong et al. S3D-CNN: skeleton-based 3D consecutive-low-pooling neural network for fall detection
CN102980454B (en) Explosive ordnance disposal (EOD) method of robot EOD system based on brain and machine combination
Wu et al. Anticipating daily intention using on-wrist motion triggered sensing
CN106377228A (en) Monitoring and hierarchical-control method for state of unmanned aerial vehicle operator based on Kinect
CN116617011B (en) Wheelchair control method, device, terminal and medium based on physiological signals
CN105590084A (en) Robot human face detection tracking emotion detection system
CN109473168A (en) A kind of medical image robot and its control, medical image recognition methods
CN111134974B (en) Wheelchair robot system based on augmented reality and multi-mode biological signals
Salh et al. Intelligent surveillance robot
Ageishi et al. Real-time hand-gesture recognition based on deep neural network
CN113894779A (en) Multi-mode data processing method applied to robot interaction
Mule et al. In-house object detection system for visually impaired

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150617

Termination date: 20191218