CN103679203B - Robot system and method for detecting human face and recognizing emotion - Google Patents

Robot system and method for detecting human face and recognizing emotion

Info

Publication number: CN103679203B (granted); earlier publication CN103679203A
Application number: CN201310694112.7A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: expression, module, human, robot, image
Inventors: 蔡则苏, 王丙祥, 王玲
Original assignee: 江苏久祥汽车电器集团有限公司

Abstract

The invention discloses a robot system and method for detecting human faces and recognizing emotion. The system comprises a facial expression library acquisition module, an original expression library building module, a feature library reconstruction module, a live expression feature extraction module, and an expression recognition module. The facial expression library acquisition module collects a large number of color facial-expression image frames through a video acquisition device and processes them to form a facial expression library. The original expression library building module removes redundant image information from the training images in the facial expression library, then extracts expression features to form an original expression feature library. The feature library reconstruction module reconstructs the original expression feature library into a structured hash table using a distance-based hashing method. The live expression feature extraction module collects live color facial-expression image frames through the video acquisition device and extracts live expression features. The expression recognition module identifies the facial expression by applying the k-nearest-neighbor classification algorithm to the live expression features within the reconstructed feature library.

Description

Robot face detection and emotion recognition system and method

Technical field

The present invention relates to the field of intelligent robot technology, and more particularly to a face detection and emotion recognition system and method for an intelligent robot.

Background technology

The face recognition capability of existing home-service robots is limited, and their emotion recognition capability is especially limited: they cannot comprehensively identify the emotional state of the elderly or children in a household. For example, Chinese patent application CN200720077448.9 describes an intelligent robot with face recognition, comprising: a robot body with leg motors, arm motors, neck motors, and a loudspeaker; a camera mounted on the robot body for capturing facial images; and a face recognition unit that identifies a captured facial image by comparing it with prestored facial images. Chinese patent application 201220365083.0 describes a security robot with face recognition, composed of a security robot body; a camera mounted on the surface of the body for capturing facial images; a face recognition module housed inside the body that identifies captured facial images by comparison with prestored ones; a data processing module that processes data and sends instructions to the other modules; and a remote control module through which the user issues instructions that make the robot perform security actions. In both patent applications, the face recognition module merely compares the captured facial image with prestored facial images; its recognition capability is limited, and in particular, when many facial images are stored, it cannot complete face recognition effectively.

In addition, Chinese utility model patent CN201120506957.5 provides a robot for assisting the elderly and the disabled. It comprises a rotatable single-camera vision system for identifying daily necessities, visitors, and obstacles; a pair of five-fingered humanoid manipulators that reproduce hand actions; and a triangular coupled-wheel drive structure that propels the robot's walking. Although this robot offers functions such as anti-theft monitoring, safety inspection, diagnosis and treatment monitoring, walking assistance, article carrying, home appliance control, sanitation, home entertainment, timed reminders and wake-up calls, and children's education, it has no face detection or emotion recognition function and cannot effectively identify the state of the elderly and children in the home.

In summary, existing face detection and emotion recognition systems for domestic robots consist mainly of a movable camera and a fixed camera and merely implement face recognition. Their degree of intelligence is limited: they cannot perceive other surrounding information or comprehensively analyze the emotional state of the elderly and children, and therefore cannot provide all-around companionship and care. It is thus necessary to propose a technical means to solve these problems.

Summary of the invention

To overcome the above deficiencies of the prior art, the main purpose of the present invention is to provide a robot face detection and emotion recognition system and method that turns the robot into a home monitoring robot capable of face recognition and emotion recognition, achieves the goal of monitoring the emotional state of the elderly and providing care for children, and improves the home monitoring and care capabilities of the household robot.

To achieve the above and other objects, the present invention proposes a robot face detection and emotion recognition system, comprising at least:

a facial expression library acquisition module, which collects a large number of color facial-expression image frames with a video acquisition device, preprocesses them, performs face detection and localization and eye detection and localization with a face detector and an eye detector, performs face rotation, and finally uses facial geometry to precisely locate the expression region, generating a facial expression library that stores the training image set used for expression feature extraction;

an original expression library building module, which takes the training images of the facial expression library, removes redundant image information, extracts expression features, and saves the expression features to a file to form an original expression feature library;

a feature library reconstruction module, which reconstructs the original expression feature library into a structured hash table using a distance-based hashing method;

a live expression feature extraction module, which collects live color facial-expression image frames from the video acquisition device, preprocesses them, performs face detection and localization and eye detection and localization with the face detector and eye detector, performs face rotation, uses facial geometry to precisely locate the expression region, generates live facial expression images, and extracts live expression features from them;

an expression recognition module, which identifies the facial expression by applying the k-nearest-neighbor classification algorithm to the live expression features extracted by the live expression feature extraction module within the reconstructed expression feature library.

Further, the preprocessing converts the color image frames into grayscale images and normalizes their brightness using histogram equalization.

Further, the original expression library building module removes redundant image information by principal component analysis (PCA) dimensionality reduction before extracting expression features.

Further, the video acquisition device is a camera mounted on the head of the robot, whose position is controlled by the head movement device of the robot.

Further, the camera is arranged in an eyeball of the robot's head.

To achieve the above objects, the present invention also provides a robot face detection and emotion recognition method, comprising the following steps:

step 1: collecting a large number of color facial-expression image frames from a video acquisition device, preprocessing them, performing face detection and localization and eye detection and localization with a face detector and an eye detector, performing face rotation, and finally using facial geometry to precisely locate the expression region, generating a facial expression library that stores the training image set used for expression feature extraction;

step 2: taking the training images of the facial expression library, removing redundant image information, extracting expression features, and saving the expression features to a file to form an original expression feature library;

step 3: reconstructing the original expression feature library into a structured hash table using a distance-based hashing method;

step 4: collecting live color facial-expression image frames from the video acquisition device, preprocessing them, performing face detection, eye detection, and face rotation, and using facial geometry to precisely locate the expression region, generating live facial expression images;

step 5: removing redundant image information from the live facial expression images and extracting live expression features;

step 6: identifying the facial expression by applying the k-nearest-neighbor classification algorithm to the live expression features within the reconstructed expression feature library.

Further, the preprocessing converts the color image frames into grayscale images and normalizes their brightness using histogram equalization.

Further, in step 2 and step 5, expression features are extracted after removing redundant image information by principal component analysis (PCA) dimensionality reduction.

Further, feature extraction by principal component analysis converts the training image set into a feature set, where each principal component corresponds to an eigenvalue; the larger the eigenvalue, the more important the corresponding principal component. The expression features are constructed by selecting a number of principal components in order of decreasing eigenvalue.

Further, in step 3, reconstructing the original expression feature library into a structured hash table by the distance-based hashing method consists of creating a hash table of a given length and inserting each expression feature into the table, which then serves as the search database.

Compared with the prior art, the robot face detection and emotion recognition system and method of the present invention, through the steps of training-expression extraction, training-feature extraction, expression feature reconstruction, live expression extraction, live feature extraction, and expression recognition, turn the robot into a home monitoring robot capable of face recognition and emotion recognition, achieve the goal of monitoring the emotional state of the elderly and providing care for children, and improve the home monitoring and care capabilities of the household robot.

Brief description of the drawings

Fig. 1 is a system architecture diagram of the robot to which the present invention is applied;

Fig. 2 is a schematic diagram of the structural configuration of the robot in a preferred embodiment of the present invention;

Fig. 3 is a system architecture diagram of the robot face detection and emotion recognition system of the present invention;

Fig. 4 is a schematic diagram of the principal components of the training expression library in a preferred embodiment of the present invention;

Fig. 5 is a schematic diagram of the average image of the training expression library in a preferred embodiment of the present invention;

Fig. 6 is a flow chart of the steps of the robot face detection and emotion recognition method of the present invention;

Fig. 7 is a schematic diagram of the experimental system interface used by the robot face detection and emotion recognition method of the present invention.

Embodiment

The embodiments of the present invention are described below by way of specific examples and the accompanying drawings; those skilled in the art can easily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention may also be implemented or applied through other different specific examples, and the details in this specification may be modified and changed in various ways from different viewpoints and for different applications without departing from the spirit of the present invention.

Before introducing the face detection and emotion recognition system of the present invention, the structure of the robot to which the invention is applied is first described. Fig. 1 is a system architecture diagram of this robot. As shown in Fig. 1, the robot comprises a master control system 210, a motion device 220, a head action device 230, a human-machine interaction system 240, a video acquisition module 250, a wireless transceiver module 260, a voice acquisition module 270, a mileage measurement device 280, and a remote control receiver 290.

The master control system 210 controls and coordinates the other parts. It comprises hardware and software: the hardware consists of a DSP, an MCU, and so on, while the software is divided into modules for path planning, obstacle detection, face detection, emotion detection, environment detection, anomaly detection, mileage information, and information fusion. Face detection and emotion recognition are connected to video acquisition device 1 to collect face information and identify expressions such as happiness, anger, sorrow, and joy; environment perception and anomaly detection are connected to video acquisition device 2, mainly to perceive the external environment and detect abnormalities in it; the mileage information module is connected to the mileage measurement device 280 to obtain mileage information; obstacle detection is connected to the ultrasonic sensors to measure the distance from external obstacles to the robot; and information fusion merges the information collected by the external video acquisition modules, the voice acquisition module, the ultrasonic sensors, and the power detection and charging module with the mileage information, to decide whether the robot should charge or continue serving, whether it needs to avoid an obstacle, and whether it needs to respond to a call command.

The motion device 220 comprises a chassis driver, a motor drive module, DC motors M1/M2, driving wheels 1/2 with a gear train, a universal wheel, a battery pack, ultrasonic sensors, and a power detection and charging module. It mainly receives commands from the master control system 210 to drive the robot's walking, performs ultrasonic ranging while walking and feeds the results back to the master control system 210 for subsequent control, and during service monitors the battery level and reports it to the master control system 210 to decide whether charging is needed.

The head action device 230 comprises a head controller, servos 1/2, and a transmission mechanism. It receives commands from the master control system 210 to pitch the head and rotate it left and right, driving the motion of the monocular cameras of video acquisition devices 1/2 mounted on the head to obtain image information. In the present invention, servo 1 controls the pitch of the neck and servo 2 controls the left-right rotation of the head, so that the rotation of the eyeballs (video acquisition devices 1 and 2) can be controlled.

The human-machine interaction system 240 is connected to the master control system 210 and comprises conventional input/output devices such as a display screen, a keyboard, and a speech system, facilitating display of the robot's state and manual input of information and commands. The video acquisition module 250 comprises video acquisition device 1 and video acquisition device 2: device 1 comprises an ordinary monocular camera and its acquisition circuit, used to obtain the information needed for face detection and emotion recognition, while device 2 comprises a wide-angle monocular camera and its acquisition circuit, used to obtain the information needed for environment perception and anomaly detection. The video acquisition module is mounted on the robot's head, and changes of its position are controlled by the head action device 230. The wireless transceiver module 260, under the control of the master control system 210, sends information to the user (owner) and receives commands over a communication network; the voice acquisition module 270 collects external voice information for the master control system 210 to judge whether it contains speech or commands; and the mileage measurement device 280 measures the walking mileage and uploads the mileage information to the master control system 210 for processing.

Fig. 2 is a schematic diagram of the structural configuration of the robot in a preferred embodiment of the present invention. In this embodiment, the master control system controls a stepper motor control system through a communication module to drive the left and right arms and the head rotation. The face detection and emotion recognition module of the master control system obtains the face information collected by camera 1 through a USB interface and identifies expressions such as happiness, anger, sorrow, and joy; the environment detection module obtains the information needed for environment perception and anomaly detection through a USB interface; and the master control system connects multiple microphones (voice acquisition devices) through a splitter to collect external voice information and judge whether it contains speech or commands. The master control system is also connected to a chassis controller, through which a lower-computer control system drives the left and right wheel motors; at the same time, ultrasonic detectors are used for obstacle avoidance and path planning, and the power detection and charging module is controlled to perform power detection and automatic charging.

Fig. 3 is a system architecture diagram of the robot face detection and emotion recognition system of the present invention. As shown in Fig. 3, the system comprises at least a facial expression library acquisition module 30, an original expression feature library building module 31, a feature library reconstruction module 32, a live expression feature extraction module 33, and an expression recognition module 34.

The facial expression library acquisition module 30 collects a large number of color facial-expression image frames from video acquisition device 1 (the camera), converts them to grayscale images, and normalizes their brightness by histogram equalization (the preprocessing); it then performs face detection and localization and eye detection and localization with a face detector and an eye detector respectively, performs face rotation, and finally uses facial geometry to precisely locate the expression region, generating the training image set used for expression feature extraction, i.e. the facial expression library.
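The preprocessing described here — grayscale conversion followed by histogram equalization — can be sketched with plain numpy. This is an illustrative implementation, not the patent's code; the function name and the BT.601 luminance weights are assumptions.

```python
import numpy as np

def preprocess(frame_rgb):
    """Convert an RGB color frame to grayscale, then apply histogram
    equalization to normalize brightness (a sketch of the preprocessing
    step; assumes frame_rgb is an H x W x 3 uint8 array)."""
    # Luminance conversion (ITU-R BT.601 weights).
    gray = (0.299 * frame_rgb[..., 0]
            + 0.587 * frame_rgb[..., 1]
            + 0.114 * frame_rgb[..., 2]).astype(np.uint8)

    # Histogram equalization: map each gray level through the normalized
    # cumulative distribution function of the image's histogram.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round(255 * (cdf - cdf_min) / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)
    return lut[gray]
```

In practice a library routine such as OpenCV's histogram equalization would typically replace the hand-rolled lookup table; the point is that a dark or over-bright frame is remapped to span the full gray range before detection.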

In this preferred embodiment, the facial geometry mainly follows the rule of "three courts and five eyes," a general standard for the proportions of face height and width that people have summarized through long observation. Vertically, the face consists of three sections of equal height: hairline to eyebrows, eyebrows to the base of the nose, and base of the nose to the jaw; these are the "three courts." Horizontally, it consists of five regions of equal width: left hairline to outer left canthus, outer left canthus to inner left canthus, inner left canthus to inner right canthus, inner right canthus to outer right canthus, and outer right canthus to right hairline; these are the "five eyes." According to this rule, the rectangular region from the eyebrows down to the jaw and from the outer left canthus to the outer right canthus can be located precisely; this is the final facial expression region. Precisely relocating the expression region reduces the useless information carried by the image, which not only improves the accuracy of expression recognition but also speeds it up, generating a more effective training image set for expression feature extraction.
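Because the lower two "courts" are equally tall, the jaw line can be extrapolated once the eyebrow line and nose base are known, giving the expression rectangle directly. A minimal sketch (the function name and the choice of landmarks as inputs are assumptions; the patent does not specify how the landmarks themselves are detected):

```python
def expression_region(left_outer_x, right_outer_x, eyebrow_y, nose_base_y):
    """Locate the expression rectangle using the "three courts" rule:
    the eyebrow-to-nose-base and nose-base-to-jaw sections are equally
    tall, so the jaw line is extrapolated from two detected landmarks.
    Returns (x0, y0, x1, y1), with image y growing downward."""
    jaw_y = nose_base_y + (nose_base_y - eyebrow_y)  # equal lower courts
    return (left_outer_x, eyebrow_y, right_outer_x, jaw_y)
```

For example, with outer canthi at x = 10 and x = 90, eyebrows at y = 40, and nose base at y = 70, the expression rectangle spans from (10, 40) to (90, 100).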

The original expression library building module 31 takes the training images of the facial expression library, removes redundant image information by principal component analysis (PCA) dimensionality reduction, extracts the expression features, and saves them to a file to form the original expression feature library.

In this preferred embodiment, the original expression feature library is established as follows:

(1) PCA (Principal Component Analysis) is used to extract facial expression features.

For a facial image, the regions that best embody its expression are the mouth, eyes, and cheeks. If the most significant expression features can be extracted, both recognition accuracy and efficiency improve greatly, and a simple, effective method for extracting facial expression features is principal component analysis (PCA). Let a facial image of h x w pixels be written, by stacking its pixels, as a vector x in a d-dimensional space with d = h x w; a training set of n facial images can then be expressed as a d x n matrix X = [x1, x2, ..., xn]. The goal of PCA is to re-express the raw data x as y through a change of basis, namely y = Px, where P is a k x d projection matrix and y is a k-dimensional feature vector. PCA-based feature extraction thus converts the training set X into a feature set Y = PX. Each principal component corresponds to an eigenvalue; the larger the eigenvalue, the more important the corresponding principal component. By selecting a number of principal components in order of decreasing eigenvalue to construct the expression features, the dimensionality of the features can be reduced as much as possible while maintaining a high recognition rate.
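The PCA step above can be sketched in numpy. This is an illustrative implementation under the standard eigenfaces formulation (the function name and use of the small Gram-matrix trick are assumptions, not the patent's code):

```python
import numpy as np

def pca_features(X, k):
    """Project a training set onto its top-k principal components.
    X: d x n matrix, one vectorized face image per column.
    Returns (mean, P, Y): the mean face, the k x d projection matrix,
    and the k x n feature set Y = P (X - mean)."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Eigenfaces trick: eigenvectors of the small n x n Gram matrix,
    # mapped through Xc, give the leading eigenvectors of the d x d
    # covariance matrix without forming it explicitly.
    gram = Xc.T @ Xc
    vals, vecs = np.linalg.eigh(gram)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]      # keep the k largest
    comps = Xc @ vecs[:, order]             # d x k unnormalized eigenfaces
    comps /= np.linalg.norm(comps, axis=0)  # unit-length columns
    P = comps.T                             # k x d projection matrix
    return mean, P, P @ Xc
```

Selecting components in order of decreasing eigenvalue, as the text describes, corresponds to the `order` indexing above: the feature dimension drops from d (thousands of pixels) to k while retaining most of the expression-relevant variance.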

Fig. 4 is a schematic diagram of the principal components of the training expression library in this preferred embodiment, and Fig. 5 is a schematic diagram of its average image. As shown in Figs. 4 and 5, the present invention was tested on the standard JAFFE expression library and on an expression library collected for a particular individual (the personalized library). The JAFFE library contains 7 expressions (6 basic expressions plus a calm-state expression) in a total of 213 expression pictures, of which 140 were used for training and principal component extraction. The personalized library contains 900 pictures, of which 450 were used for training and principal component extraction. The principal components and average images obtained by principal component analysis of the training expression libraries are shown in Figs. 4 and 5, where (a) is the JAFFE library and (b) is the personalized library. The experiments found that the principal components of the expression library, i.e. its eigenvectors, are quite similar to the eigenfaces used in face detection.

The feature library reconstruction module 32 reconstructs the original expression feature library into a structured hash table using distance-based hashing (DBH), so as to improve the efficiency of expression recognition.

After PCA dimensionality reduction establishes the original expression feature library from the training image set, the library is a matrix composed of multiple expression features. During recognition, the expression under test must be compared for similarity with every feature in the library, which becomes very inefficient when the library is large. To avoid similarity comparison over the whole expression feature library, the present invention uses distance-based hashing (DBH) to reconstruct the feature library into a structured hash table in which the features are grouped by similarity. Retrieval then takes place only within the corresponding bucket, achieving the goal of pruning the original feature library. The basic idea of DBH-based reconstruction of the facial expression feature library is to create a hash table of a given length and insert each expression feature into it, using this hash table as the search database.
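The bucket structure can be sketched as follows. This is a deliberately simplified stand-in for DBH, not the patent's construction: each hash bit merely records which of two pivot features a sample lies closer to (full DBH derives hash functions from interpoint distances with learned thresholds), but it shows how similar features collide into the same bucket so that search is pruned. All names are illustrative.

```python
import numpy as np

def build_dbh_table(features, pivot_pairs):
    """Reconstruct a feature library as a hash table (simplified
    distance-based-hashing sketch): one bit per pivot pair, so features
    with similar distances to the pivots share a bucket.
    features: list of 1-D numpy arrays; pivot_pairs: list of (p1, p2)."""
    def key(x):
        return tuple(int(np.linalg.norm(x - p1) < np.linalg.norm(x - p2))
                     for p1, p2 in pivot_pairs)

    table = {}
    for i, x in enumerate(features):
        table.setdefault(key(x), []).append(i)  # bucket holds feature ids
    return table, key
```

At query time only the bucket `table.get(key(query), [])` is searched, rather than the whole library, which is the pruning effect the text describes.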

The live expression feature extraction module 33 converts the live color facial-expression image frames collected from video acquisition device 1 into grayscale images and normalizes their brightness by histogram equalization; it then performs face detection and localization and eye detection and localization with the face detector and eye detector respectively, performs face rotation, and finally uses facial geometry to precisely locate the expression region, generating live facial expression images. It removes redundant image information by principal component analysis (PCA) dimensionality reduction, extracts the live expression features, and delivers them to the expression recognition module 34.

The expression recognition module 34 identifies the facial expression by applying the k-nearest-neighbor (KNN) classification algorithm to the live expression features extracted by module 33 within the reconstructed expression feature library, and uses the result to guide and predict the robot's behavior toward the elderly and children.

Fig. 6 is a flow chart of the steps of the robot face detection and emotion recognition method of the present invention. As shown in Fig. 6, the method comprises the following steps:

Step 601, training-expression extraction. First, a large number of color facial-expression image frames are collected from the video acquisition device (camera), converted to grayscale images, and brightness-normalized by histogram equalization; face detection and eye detection are then performed with the face detector and eye detector, the face is rotated, and facial geometry is used to precisely locate the expression region, generating the training image set used for expression feature extraction, i.e. establishing the facial expression library.

Step 602, training-feature extraction. The training images of the facial expression library are taken, redundant image information is removed by principal component analysis (PCA) dimensionality reduction, expression features are extracted, and the expression features are saved to a file to form the original expression feature library.

In this preferred embodiment, PCA (Principal Component Analysis) is used to extract the facial expression features.

For a facial image, the regions that best embody its expression are the mouth, eyes, and cheeks. If the most significant expression features can be extracted, both recognition accuracy and efficiency improve greatly, and a simple, effective method for extracting facial expression features is principal component analysis (PCA). Let a facial image of h x w pixels be written, by stacking its pixels, as a vector x in a d-dimensional space with d = h x w; a training set of n facial images can then be expressed as a d x n matrix X = [x1, x2, ..., xn]. The goal of PCA is to re-express the raw data x as y through a change of basis, namely y = Px, where P is a k x d projection matrix and y is a k-dimensional feature vector. PCA-based feature extraction thus converts the training set X into a feature set Y = PX. Each principal component corresponds to an eigenvalue; the larger the eigenvalue, the more important the corresponding principal component. By selecting a number of principal components in order of decreasing eigenvalue to construct the expression features, the dimensionality of the features can be reduced as much as possible while maintaining a high recognition rate.

Step 603, expression feature reconstruction. The original expression feature library is reconstructed into a structured hash table using distance-based hashing (DBH), so as to improve the efficiency of expression recognition.

After PCA dimensionality reduction establishes the original expression feature library from the training image set, the library is a matrix composed of multiple expression features. During recognition, the expression under test must be compared for similarity with every feature in the library, which becomes very inefficient when the library is large. To avoid similarity comparison over the whole expression feature library, the present invention uses distance-based hashing (DBH) to reconstruct the feature library into a structured hash table in which the features are grouped by similarity. Retrieval then takes place only within the corresponding bucket, achieving the goal of pruning the original feature library. The basic idea of DBH-based reconstruction of the facial expression feature library is to create a hash table of a given length and insert each expression feature into it, using this hash table as the search database.

Step 604, live expression extraction. Live color facial-expression image frames are collected from the video acquisition device (camera), converted to grayscale images, and brightness-normalized by histogram equalization; face detection, eye detection, and face rotation are then performed, and facial geometry is used to precisely locate the expression region, generating live facial expression images.

Step 605, field expression feature extraction. Using the field facial expression image, principal component analysis (PCA) dimensionality reduction removes redundant image information, the field expression features are extracted, and they are delivered to the expression recognition module.

Step 606, facial expression recognition. The field expression features are matched in the reconstructed expression feature library using the k-nearest-neighbor classification algorithm (K-Nearest Neighbor algorithm, KNN) to identify the facial expression, which in turn guides the robot's behavior toward, and predictions about, the elderly and children.
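
A minimal sketch of the KNN vote, assuming the feature library is a list of (label, feature) pairs; the label names and the choice of Euclidean distance are illustrative:

```python
import math
from collections import Counter

def knn_classify(query, library, k):
    """Majority vote among the k features in the library closest to
    the query (Euclidean distance); library holds (label, feature)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(library, key=lambda lf: dist(query, lf[1]))[:k]
    votes = Counter(label for label, _ in nearest)
    return votes.most_common(1)[0][0]
```

When combined with the hash-table reconstruction of step 603, the library argument would be only the candidates from the query's bucket, not the full feature set.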

The present invention is further introduced below through an expression recognition experiment on a home-service robot.

The experimental system interface of the present invention, shown in Figure 7, consists mainly of seven parts: (1) a video frame, which displays the video images collected from the camera, the expression library collection results, and the expression recognition results; (2) an expression library collection frame, which controls expression library collection; (3) an expression recognition frame, which controls expression recognition; (4) a DBH training frame, which trains the DBH parameters; (5) an expression training frame, which controls expression feature library training; (6) a message box, which displays partial operation results and error messages; and (7) a sample number frame, which determines the number of samples collected for each expression.

After a series of processing steps on the image frames collected by the camera, the system can recognize the six basic facial expressions: happiness, sadness, surprise, disgust, anger, and fear.

In summary, the robot face detection and emotion recognition system and method of the present invention, through the steps of training expression extraction, training expression feature extraction, expression feature reconstruction, field expression extraction, field expression feature extraction, and expression recognition, turn the robot into a family monitoring robot capable of face recognition and emotion recognition, achieving the goal of having the robot monitor the affective state of the elderly and accompany children, and improving the home monitoring and care capabilities of household robots.

The above embodiments are merely illustrative of the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Therefore, the scope of the present invention should be as listed in the appended claims.

Claims (3)

1. A face detection and emotion recognition system of a robot, the robot comprising a master control system, a motion device, a head action device, a human-machine interaction system, a video acquisition module, a radio transceiver module, a voice acquisition module, a mileage measurement device, and a remote control receiver; the master control system comprises a path planning module, an obstacle detection module, the face detection and emotion recognition system, an environment detection module, an abnormality detection module, a mileage information module, and an information fusion module; the face detection and emotion recognition system is connected to the video acquisition module to collect face information and recognize expressions of joy, anger, grief, and happiness; the environment detection module and the abnormality detection module are connected to the video acquisition module to perceive the external environment and to detect abnormal information in the external environment, respectively; the mileage information module is connected to the mileage measurement device to obtain mileage information; the obstacle detection module is connected to an ultrasonic sensor to detect the distance from external obstacles to the robot; the information fusion module fuses the information collected by the video acquisition module, the voice acquisition module, the ultrasonic sensor, and the power detection and charging module with the mileage information, to judge whether the robot should charge or continue to serve, whether it needs to avoid an obstacle, and whether it needs to respond to a calling order; the motion device comprises a chassis driver, a motor drive module, a direct-current motor, driving wheels and a gear train, a universal wheel, a battery pack, the ultrasonic sensor, and the power detection and charging module; the motion device receives orders from the master control system to drive the robot to walk,
completing ultrasonic ranging in the process of walking and feeding the results back to the master control system for subsequent control; the power detection and charging module monitors the battery level in service and feeds it back to the master control system to judge whether charging is needed; the head action device comprises a head controller, steering gears, and a transmission mechanism, and receives orders from the master control system to pitch and rotate the head, driving the monocular camera of the video acquisition module mounted on the head to obtain image information; the video acquisition module comprises a video acquisition device 1 and a video acquisition device 2; the video acquisition device 1 comprises a monocular camera and its acquisition circuit, for obtaining the information needed for face detection and emotion recognition; the video acquisition device 2 comprises a wide-angle monocular camera and its acquisition circuit, for obtaining the information needed for environment perception and abnormality detection; the video acquisition module is mounted on the robot head, and the change of its position is controlled by the head action device; the face detection and emotion recognition system comprises at least:
a facial expression library acquisition module, which uses the video acquisition module to collect color image frames of facial expressions; after preprocessing, a face detector and an eye detector perform face detection and localization and eye detection and localization, the face is rotated, and finally facial geometric features (the "three sections, five eyes" proportions) are used to locate the expression region, a rectangular area extending from the eyebrows to the lower jaw and from the left outer canthus to the right outer canthus, generating a facial expression library that stores the training image set used for expression feature extraction;
an original expression library building module, which uses the training images of the facial expression library, removes redundant image information from the training images, extracts the expression features, and saves the expression features as a file to form the original expression feature library;
a feature library reconstruction module, which uses the distance-based hashing method to reconstruct the original expression feature library into a structured hash table;
a field expression feature extraction module, which collects color image frames of facial expressions on site from the video acquisition module; after preprocessing, a face detector and an eye detector perform face detection and localization and eye detection and localization, the face is rotated, facial geometric features are used to locate the expression region, the field facial expression image is generated, and field expression features are extracted from the field facial expression image;
an expression recognition module, which takes the field expression features extracted by the field expression feature extraction module and uses the k-nearest-neighbor classification algorithm to recognize the facial expression in the reconstructed expression feature library.
2. The face detection and emotion recognition system of a robot according to claim 1, wherein the preprocessing converts the color image frames into grayscale images and uses histogram equalization to normalize the brightness of the grayscale images.
3. The face detection and emotion recognition system of a robot according to claim 1, wherein the original expression library building module uses principal component analysis (PCA) dimensionality reduction to remove redundant image information and then extracts the expression features.
CN201310694112.7A 2013-12-18 2013-12-18 Robot system and method for detecting human face and recognizing emotion CN103679203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310694112.7A CN103679203B (en) 2013-12-18 2013-12-18 Robot system and method for detecting human face and recognizing emotion


Publications (2)

Publication Number Publication Date
CN103679203A CN103679203A (en) 2014-03-26
CN103679203B true CN103679203B (en) 2015-06-17

Family

ID=50316691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310694112.7A CN103679203B (en) 2013-12-18 2013-12-18 Robot system and method for detecting human face and recognizing emotion

Country Status (1)

Country Link
CN (1) CN103679203B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590084A (en) * 2014-11-03 2016-05-18 贵州亿丰升华科技机器人有限公司 Robot human face detection tracking emotion detection system
CN104408693A (en) * 2014-11-27 2015-03-11 苏州大学 Color image reconstruction and identification method and system
CN104486331A (en) * 2014-12-11 2015-04-01 上海元趣信息技术有限公司 Multimedia file processing method, client terminals and interaction system
CN104835507B (en) * 2015-03-30 2018-01-16 渤海大学 A kind of fusion of multi-mode emotion information and recognition methods gone here and there and combined
CN104700094B (en) * 2015-03-31 2016-10-26 江苏久祥汽车电器集团有限公司 A kind of face identification method for intelligent robot and system
CN107103269A (en) * 2016-02-23 2017-08-29 芋头科技(杭州)有限公司 One kind expression feedback method and intelligent robot
CN105843118B (en) * 2016-03-25 2018-07-27 北京光年无限科技有限公司 A kind of robot interactive method and robot system
CN105938543A (en) * 2016-03-30 2016-09-14 乐视控股(北京)有限公司 Addiction-prevention-based terminal operation control method, device, and system
CN105843068A (en) * 2016-06-02 2016-08-10 安徽声讯信息技术有限公司 Emotion robot-based smart home environment collaborative control system
CN106407882A (en) * 2016-07-26 2017-02-15 河源市勇艺达科技股份有限公司 Method and apparatus for realizing head rotation of robot by face detection
CN106346475A (en) * 2016-11-01 2017-01-25 上海木爷机器人技术有限公司 Robot and robot control method
CN106778497A (en) * 2016-11-12 2017-05-31 上海任道信息科技有限公司 A kind of intelligence endowment nurse method and system based on comprehensive detection
CN106778861A (en) * 2016-12-12 2017-05-31 齐鲁工业大学 A kind of screening technique of key feature
CN107085717A (en) * 2017-05-24 2017-08-22 努比亚技术有限公司 A kind of family's monitoring method, service end and computer-readable recording medium
CN108656130A (en) * 2018-05-31 2018-10-16 芜湖星途机器人科技有限公司 Robot that automatically finds people
CN109145837A (en) * 2018-08-28 2019-01-04 厦门理工学院 Face emotion identification method, device, terminal device and storage medium
WO2020057570A1 (en) * 2018-09-18 2020-03-26 AI Gaspar Limited System and process for identification and illumination of anatomical sites of a person and articles at such sites

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186774A (en) * 2013-03-21 2013-07-03 北京工业大学 Semi-supervised learning-based multi-gesture facial expression recognition method
CN103268156A (en) * 2013-05-24 2013-08-28 徐州医学院 Wrist-strap-type gesture recognition device based on human-computer interaction

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101786272A (en) * 2010-01-05 2010-07-28 深圳先进技术研究院 Multisensory robot used for family intelligent monitoring service
CN102566474A (en) * 2012-03-12 2012-07-11 上海大学 Interaction system and method for robot with humanoid facial expressions, and face detection and tracking method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186774A (en) * 2013-03-21 2013-07-03 北京工业大学 Semi-supervised learning-based multi-gesture facial expression recognition method
CN103268156A (en) * 2013-05-24 2013-08-28 徐州医学院 Wrist-strap-type gesture recognition device based on human-computer interaction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Facial expression recognition and its application in video classification and recommendation; Zhao Sicheng; China Masters' Theses Full-text Database; 6 January 2012; pp. 13-23 *
Facial expression recognition based on PCA feature extraction and distance-hash K-nearest-neighbor classification; Cai Zesu et al.; Intelligent Computer and Applications; 29 February 2012; Vol. 2, No. 1; pp. 1-3 *

Also Published As

Publication number Publication date
CN103679203A (en) 2014-03-26

Similar Documents

Publication Publication Date Title
Tapu et al. A smartphone-based obstacle detection and classification system for assisting visually impaired people
Geiger et al. 3d traffic scene understanding from movable platforms
CN104589356B (en) The Dextrous Hand remote operating control method caught based on Kinect human hand movement
US20150339589A1 (en) Apparatus and methods for training robots utilizing gaze-based saliency maps
CN105144202B (en) Robot behavior is adjusted based on mankind's robot interactive
EP1343115B1 (en) Robot apparatus; face recognition method; and face recognition apparatus
Piana et al. Real-time automatic emotion recognition from body gestures
Chen et al. Brain-inspired cognitive model with attention for self-driving cars
Gu et al. Human gesture recognition through a kinect sensor
Jalal et al. Ridge body parts features for human pose estimation and recognition from RGB-D video data
CN104950887B (en) Conveying arrangement based on robotic vision system and independent tracking system
Faria et al. Extracting data from human manipulation of objects towards improving autonomous robotic grasping
CN102854983B (en) A kind of man-machine interaction method based on gesture identification
CN105342769A (en) Intelligent electric wheelchair
EP3563986A1 (en) Robot, server and man-machine interaction method
Mittal A Survey on optimized implementation of deep learning models on the NVIDIA Jetson platform
Li et al. A web-based sign language translator using 3d video processing
CN104083258B (en) A kind of method for controlling intelligent wheelchair based on brain-computer interface and automatic Pilot technology
CN105769120A (en) Fatigue driving detection method and device
US20050240412A1 (en) Robot behavior control system and method, and robot apparatus
CN1304931C (en) Head carried stereo vision hand gesture identifying device
Gomez-Donoso et al. Lonchanet: A sliced-based cnn architecture for real-time 3d object recognition
CN105809144B (en) A kind of gesture recognition system and method using movement cutting
US20180186452A1 (en) Unmanned Aerial Vehicle Interactive Apparatus and Method Based on Deep Learning Posture Estimation
CN106027896A (en) Video photographing control device and method, and unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
C14 Grant of patent or utility model