CN110348344B - Special facial expression recognition method based on two-dimensional and three-dimensional fusion - Google Patents

Info

Publication number
CN110348344B
CN110348344B
Authority
CN
China
Prior art keywords
dimensional
data
camera
recognition
dimensional data
Prior art date
Legal status
Active
Application number
CN201910571295.0A
Other languages
Chinese (zh)
Other versions
CN110348344A (en)
Inventor
林斌
陈君楠
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201910571295.0A
Publication of CN110348344A
Application granted
Publication of CN110348344B
Legal status: Active (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a special facial expression recognition method based on two-dimensional and three-dimensional fusion. The apparatus mainly comprises a left camera, a right camera, a projector and a chessboard calibration plate: the left and right cameras are positioned on the left and right sides of the device, the projector is positioned at the midpoint between the two cameras, and the chessboard calibration plate is placed in the common field of view in front of the cameras. During the processing of the three-dimensional data, a method for dividing the depth map into regions is provided; an effective region is extracted as input data, which reduces the amount of computation without significantly affecting the recognition rate. The recognition results of the two-dimensional and three-dimensional data are fused, which improves the recognition rate achieved with two-dimensional data alone and reduces the influence of environmental factors such as makeup and lighting on expression recognition.

Description

Special facial expression recognition method based on two-dimensional and three-dimensional fusion
Technical Field
The invention relates to a facial expression recognition method, in particular to a special facial expression recognition method based on two-dimensional and three-dimensional fusion, in which extracted three-dimensional information assists two-dimensional information in recognizing facial expressions; it also covers a method for acquiring the two-dimensional and three-dimensional data.
Background
Facial expression recognition is a technology for recognizing expressions by analyzing and processing facial geometry and texture information. Most traditional expression recognition algorithms operate on two-dimensional images: the processed objects are color or grayscale images, and recognition relies on the texture information of the face. A two-dimensional image is essentially the brightness pattern of a face at a certain angle under certain lighting conditions; expression recognition is performed by computing facial feature points and the relations between them, so the method is easily affected by lighting conditions, changes in makeup, and the like. Therefore, although two-dimensional face recognition achieves a good recognition rate in many situations, it still has shortcomings that cannot be ignored.
With the development of science and technology, the data acquisition systems of three-dimensional imaging technology have matured, and more and more researchers are engaged in three-dimensional face recognition. The object processed in three-dimensional face recognition is the three-dimensional point cloud of a human face, whose three-dimensional structure is matched against the three-dimensional face information in a database. Compared with two-dimensional facial expression recognition, three-dimensional facial expression recognition therefore has the following advantages: (1) three-dimensional data is richer and more concrete than two-dimensional data and carries more detailed information, so accuracy can be improved; (2) the collected three-dimensional data represents the three-dimensional shape of the face and is not easily affected by illumination, makeup and the like, so its range of application is wider and more flexible than that of two-dimensional face recognition.
Although three-dimensional recognition offers richer information and stronger resistance to interference, it also has the following drawbacks: (1) data acquisition: limited by current three-dimensional acquisition technology, accurate acquisition of three-dimensional data remains difficult; (2) data processing: three-dimensional processing algorithms are complex and their computational load is larger than that of two-dimensional processing, so the hardware requirements on the computer are higher.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a fusion face recognition method that considers texture information and shape information jointly. It offers a technical scheme in which extracted three-dimensional information assists two-dimensional expression recognition, exploiting the advantages of both kinds of information while overcoming their respective defects, and it also provides a method for acquiring two-dimensional and three-dimensional face image information and a method for the subsequent fusion recognition.
The invention is realized by the following technical scheme:
The invention discloses a special facial expression recognition method based on two-dimensional and three-dimensional fusion using binocular structured light. The device implementing the method mainly comprises a left camera, a right camera, a projector and a chessboard calibration plate; the left and right cameras are positioned on the left and right sides of the device, the projector is positioned at the midpoint between them, and the chessboard calibration plate is placed in the common field of view in front of the two cameras. The specific steps are as follows:
1) with the position and angle of the chessboard calibration plate varied between shots, 15-25 groups of pictures are taken by the left and right cameras and stored in order;
2) monocular calibration: monocular calibration is performed using the pictures taken by the left camera and the actual size of the chessboard, yielding the focal length and distortion parameters of the left camera; the right camera is handled in the same way;
3) binocular calibration: after the respective calibration parameters of the left and right cameras are obtained in step 2), the positional relation and mapping function between the cameras are computed from the corresponding picture pairs, and the mapping transformations for distortion correction and stereo correction are then derived (an illustrative calibration sketch follows this step list);
4) shooting a human face: in a dark environment, structured light is projected onto the detected face by the projector, and one face picture is taken by each of the left and right cameras;
5) two-dimensional expression recognition: the two-dimensional pictures are first subjected to histogram equalization and filtering denoising; face normalization is then performed, comprising gray-level normalization and geometric normalization, where the gray-level normalization formula is:
x_new = x/(max + 0.0001)   (1)
in which x is the gray value of each pixel in the original picture, x_new is the normalized value, and max is the maximum gray value;
6) three-dimensional data calculation: stereo matching is performed using the left and right images and the binocular calibration parameters, and point cloud data carrying three-dimensional information is computed;
7) three-dimensional expression recognition: the unordered point cloud data is first rasterized, i.e. interpolated onto a regular grid by a lattice-point spline function, and stored as a depth information map; feature points including the nose tip, eye corners and mouth corners are extracted; the depth map is then divided into regions using these feature points, and the eye and mouth parts are selected and recombined into gray-scale data of the input size required by the subsequent training network; finally, expression recognition is performed on the resulting data with the trained model;
8) after steps 1)-3) are completed, steps 4)-7) are repeated until sufficiently large two-dimensional and three-dimensional databases are obtained, and learning is carried out with a self-built network training model;
9) the recognition process: steps 4)-7) are repeated, the obtained data is fed into the two trained models to obtain the probability distribution of each expression under two-dimensional and three-dimensional recognition, the two distributions are added, and the expression with the maximum probability value is taken as the final recognition result.
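As an illustration of steps 2) and 3), the following minimal sketch shows one way the monocular and binocular calibration and the rectification mappings could be computed with OpenCV. The chessboard pattern size, square size and file names are assumptions for the sketch, not values fixed by the patent.

```python
import glob

import cv2
import numpy as np

# Assumed chessboard geometry and file names -- the patent only specifies
# 15-25 picture groups of a plate of known size, not these values.
PATTERN = (9, 6)      # inner corners per row and column (assumption)
SQUARE_MM = 25.0      # physical square size in mm (assumption)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(gl, PATTERN)
    ok_r, corners_r = cv2.findChessboardCorners(gr, PATTERN)
    if ok_l and ok_r:                 # keep pairs where both cameras see the board
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

size = gl.shape[::-1]                 # image size as (width, height)

# Step 2): monocular calibration of each camera (focal length + distortion)
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)

# Step 3): binocular calibration -- rotation R and translation T between cameras
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# Mapping transformations for distortion correction and stereo correction
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
map_lx, map_ly = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
# A face pair from step 4) would then be rectified with cv2.remap before matching.
```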
As a further improvement, the geometric normalization of the invention comprises the following specific steps (a sketch follows the steps):
step one, the feature points are found: the two eyes and the nose tip serve as three feature points, giving the inter-eye distance d and the eye midpoint O;
step two, the picture is rotated using the coordinates of the two eyes so that the face is upright;
step three, the rectangular cutting area is determined from facial feature points such as the eyes and mouth; with O as the origin, a reasonable range containing enough face information is cut out above, below, left and right;
step four, the cropped expression-region image is rescaled so that all images are normalized to the same size;
finally, expression recognition is performed with the trained model on the gray-value matrix obtained after the preprocessing.
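A minimal sketch of this geometric normalization, assuming the eye coordinates have already been located by a feature detector; the function name, signature and 48 × 48 output size are illustrative, and the crop margins follow the concrete embodiment given later in the description (d to each side, 0.5d above, 1.5d below).

```python
import cv2
import numpy as np

def geometric_normalize(gray, left_eye, right_eye, out_size=48):
    """Rotate so the eye line is horizontal, then crop around the eye
    midpoint O using the inter-eye distance d."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    d = np.hypot(x2 - x1, y2 - y1)                # inter-eye distance d
    ox, oy = (x1 + x2) / 2.0, (y1 + y2) / 2.0     # eye midpoint O

    # Rotate about O so the face is upright (eye line made horizontal)
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    M = cv2.getRotationMatrix2D((ox, oy), angle, 1.0)
    rotated = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))

    # Crop d to each side, 0.5d above and 1.5d below the midpoint O
    x0, x3 = int(ox - d), int(ox + d)
    y0, y3 = int(oy - 0.5 * d), int(oy + 1.5 * d)
    crop = rotated[max(y0, 0):y3, max(x0, 0):x3]

    # Rescale to the common input size
    return cv2.resize(crop, (out_size, out_size))
```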
As a further improvement, the acquisition of training data is divided into two-dimensional and three-dimensional acquisition. The two-dimensional data is stored in a two-dimensional database after the preprocessing of step 5), and the three-dimensional data is stored in a three-dimensional database after the preprocessing of step 7). When the database samples are large enough, learning and training are performed on each database separately to obtain two trained models; during subsequent recognition, the two models compute their respective probability distributions, which are then fused (see the sketch below).
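The fusion itself is a decision-level sum of the two probability distributions. A minimal sketch, assuming both models output normalized probability vectors over the same expression classes; the class list is a placeholder, since the patent does not enumerate the expressions, though it singles out "fear" and "anger" as hard cases for two-dimensional data alone.

```python
import numpy as np

# Placeholder class list (assumption; not enumerated in the patent)
EXPRESSIONS = ["happy", "sad", "fear", "anger", "surprise", "disgust", "neutral"]

def fuse_and_classify(p2d, p3d):
    """p2d, p3d: probability vectors produced by the 2-D and 3-D models."""
    fused = np.asarray(p2d) + np.asarray(p3d)    # add the two distributions
    return EXPRESSIONS[int(np.argmax(fused))]    # take the maximum fused value

# Example: 2-D alone wavers between "fear" and "happy"; the 3-D model
# tips the fused sum decisively toward "fear".
print(fuse_and_classify([0.30, 0.05, 0.32, 0.28, 0.02, 0.02, 0.01],
                        [0.10, 0.05, 0.55, 0.20, 0.05, 0.03, 0.02]))  # fear
```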
The invention has the following beneficial effects:
To address the problems that two-dimensional recognition is affected by environmental factors such as illumination and makeup while three-dimensional recognition is difficult to process, a method is proposed that uses partial three-dimensional data to assist a two-dimensional image in recognizing expressions, together with acquisition and preprocessing methods for the two-dimensional image and the three-dimensional point cloud. Training the two-dimensional and three-dimensional databases under the independently built training model and fusing the results improves the recognition rate over using two-dimensional images alone; in particular, for the expressions "fear" and "anger", where two-dimensional accuracy is lower than for other expressions, the recognition rate is effectively improved once three-dimensional data is combined. Meanwhile, because the three-dimensional data is reduced to regions after extraction, the subsequent processing difficulty and computation remain modest. Because the normalized data specifications are consistent, the same network training model can be used for both three-dimensional and two-dimensional training.
A method for dividing the depth map into regions is provided during the processing of the three-dimensional data; an effective region is extracted as input data, which reduces the amount of computation without significantly affecting the recognition rate. The recognition results of the two-dimensional and three-dimensional data are fused, which improves the recognition rate of existing methods that use two-dimensional data alone and reduces the influence of environmental factors such as makeup and lighting on expression recognition.
Drawings
FIG. 1 is a schematic diagram of the binocular structured-light setup used to acquire two-dimensional and three-dimensional data according to the present invention;
in the figure, 1 is the left camera, 2 is the right camera, 3 is the projector, 4 is the chessboard calibration plate, and 5 is the detected human face.
Detailed Description
The invention aims to provide a method in which three-dimensional information is extracted to assist two-dimensional expression recognition, addressing both the interference that external factors such as makeup and illumination cause for existing two-dimensional expression recognition and the difficulty of acquiring and processing three-dimensional data; it also provides a method for acquiring two-dimensional and three-dimensional face image information. The device implementing the method comprises a left camera 1, a right camera 2, a projector 3 and a chessboard calibration plate 4. The left camera 1 and the right camera 2 are 15 cm apart, the projector 3 is positioned at their midpoint, and the chessboard calibration plate 4 is placed in the common field of view in front of the two cameras, i.e. the position shown in the figure. The plate is photographed by the left camera 1 and the right camera 2; after each group of pictures is recorded, the position (moved up, down, left or right) and angle of the chessboard are changed and the next group is taken, until pictures of 20 poses of the chessboard calibration plate 4 have been saved for subsequent calibration.
The specific implementation method of the invention is as follows:
1) the left camera 1 and the right camera 2 each photograph the chessboard calibration plate 4 of known size; the structured light of the projector 3 must be projected onto the chessboard calibration plate 4 during shooting. One group of pictures is taken and stored each time, then the angle and position of the plate are changed and the next group is taken, 20 groups in total;
2) monocular calibration: monocular calibration is performed using the pictures taken by the left camera 1 and the actual size of the chessboard, yielding parameters such as the focal length and distortion of the left camera 1; the right camera 2 is handled in the same way;
3) binocular calibration: after the respective calibration parameters of the left camera 1 and the right camera 2 are obtained in step 2), the positional relation and mapping function between the cameras are computed from the corresponding picture pairs, and the mapping transformations for distortion correction and stereo correction are then derived;
4) shooting a human face: in a dark environment, structured light is projected onto the detected face 5, and one face picture is taken by each of the left and right cameras;
5) two-dimensional data preprocessing: the two-dimensional pictures are first subjected to histogram equalization and filtering denoising; face normalization is then performed, comprising gray-level normalization and geometric normalization, where the gray-level normalization formula is:
x_new = x/(max + 0.0001)   (2)
in which x is the gray value of each pixel in the original picture, x_new is the normalized value, and max is the maximum gray value.
The geometric normalization comprises the following steps: step one, the feature points are found, the two eyes and the nose tip serving as three feature points, which give the inter-eye distance d and the eye midpoint O; step two, the picture is rotated using the coordinates of the two eyes so that the face is upright; step three, the rectangular cutting area is determined from the eye and mouth feature points of the face: with O as the origin, a length d is cut to the left and to the right, 0.5d upward and 1.5d downward; step four, the cropped expression-region image is rescaled to a common size. A 48 × 48 gray-value matrix is finally obtained after this preprocessing (a sketch of the pipeline follows).
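A minimal sketch of this two-dimensional preprocessing, assuming an OpenCV pipeline; the patent specifies histogram equalization, a denoising filter and formula (2) but does not name the filter, so the 3 × 3 median blur here is an assumption.

```python
import cv2
import numpy as np

def preprocess_2d(img):
    """Histogram equalization, filtering denoising and the gray-level
    normalization of formula (2): x_new = x / (max + 0.0001)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
    gray = cv2.equalizeHist(gray)            # histogram equalization
    gray = cv2.medianBlur(gray, 3)           # denoising (assumed median, k=3)
    x = gray.astype(np.float32)
    return x / (x.max() + 0.0001)            # formula (2), values in [0, 1)
```

The geometric normalization sketched earlier would then be applied to this result to obtain the 48 × 48 matrix fed to the network.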
6) three-dimensional data calculation: stereo matching is performed using the left and right images and the binocular calibration parameters to compute point cloud data carrying three-dimensional information.
7) three-dimensional data preprocessing: the unordered point cloud data is first rasterized, i.e. interpolated onto a regular grid by a lattice-point spline function, and stored as a depth information map; feature points including the nose tip, eye corners and mouth corners are extracted; the depth map is then divided into regions using these feature points, and the eye and mouth parts are selected and recombined into 48 × 48 data (see the sketch after these steps).
8) after steps 1)-3) are completed, steps 4)-7) are repeated until sufficiently large two-dimensional and three-dimensional databases are obtained, and learning proceeds with the self-built network training model.
9) the recognition process: steps 4)-7) are repeated, the obtained data is fed into the two trained models to obtain the probability distribution of each expression under two-dimensional and three-dimensional recognition, the two distributions are added, and the expression with the maximum probability value is taken as the final recognition result.
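For steps 6) and 7), the following sketch illustrates one plausible pipeline: semi-global block matching for stereo correspondence, reprojection to a point cloud with the Q matrix from rectification, and rasterization of the scattered cloud onto a regular grid as a depth map. The SGBM parameters are assumptions (the patent does not name a matching algorithm), and SciPy's cubic interpolant stands in for the lattice-point spline function of the text.

```python
import cv2
import numpy as np
from scipy.interpolate import griddata

def depth_map_from_pair(rect_left, rect_right, Q, grid=256):
    """rect_left/rect_right: rectified grayscale face images; Q: the
    reprojection matrix from cv2.stereoRectify."""
    # Stereo matching (algorithm and parameters are assumptions)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0

    # Step 6): point cloud data carrying three-dimensional information
    cloud = cv2.reprojectImageTo3D(disp, Q)
    pts = cloud[disp > 0]                       # keep valid matches only

    # Step 7): rasterize the unordered cloud onto a regular (x, y) grid,
    # interpolating z to produce the depth information map
    gx, gy = np.meshgrid(
        np.linspace(pts[:, 0].min(), pts[:, 0].max(), grid),
        np.linspace(pts[:, 1].min(), pts[:, 1].max(), grid))
    return griddata(pts[:, :2], pts[:, 2], (gx, gy), method="cubic")

# Eye and mouth regions located from the feature points would then be cut
# from the returned depth map and recombined into a 48 x 48 input, mirroring
# the 2-D pipeline.
```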
The foregoing description is not intended to limit the present invention. It should be noted that various changes, modifications, additions and substitutions may be made by those skilled in the art without departing from the spirit and scope of the present invention, and such changes and modifications should be construed as falling within the scope of the present invention.

Claims (3)

1. A special facial expression recognition method based on two-dimensional and three-dimensional fusion of binocular structured light, characterized in that the device implementing the method mainly comprises a left camera (1), a right camera (2), a projector (3) and a chessboard calibration plate (4), wherein the left camera (1) and the right camera (2) are positioned on the left and right sides of the device, the projector (3) is positioned at the midpoint between the left camera (1) and the right camera (2), and the chessboard calibration plate (4) is positioned in the common field of view in front of the two cameras; the method specifically comprises the following steps:
1) with the position and angle of the chessboard calibration plate (4) varied between shots, 15-25 groups of pictures are taken by the left camera (1) and the right camera (2) and stored in order;
2) monocular calibration: monocular calibration is performed using the pictures taken by the left camera (1) and the actual size of the chessboard to obtain the focal length and distortion parameters of the left camera (1); the right camera (2) is handled in the same way;
3) binocular calibration: after the respective calibration parameters of the left camera (1) and the right camera (2) are obtained in step 2), the positional relation and mapping function between the cameras are computed from the corresponding picture pairs, and the mapping transformations for distortion correction and stereo correction are then derived;
4) shooting a human face: in a dark environment, structured light is projected onto the detected face (5), and one face picture is taken by each of the left and right cameras;
5) two-dimensional expression recognition: the two-dimensional pictures are first subjected to histogram equalization and filtering denoising; face normalization is then performed, comprising gray-level normalization and geometric normalization, where the gray-level normalization formula is:
x_new = x/(max + 0.0001)   (1)
in which x_new is the gray value of each pixel in the processed picture, x is the gray value of each pixel in the original picture, and max is the maximum gray value;
6) three-dimensional data calculation: stereo matching is performed using the left and right images and the binocular calibration parameters, and point cloud data carrying three-dimensional information is computed;
7) three-dimensional expression recognition: the unordered point cloud data is first rasterized, i.e. interpolated onto a regular grid by a lattice-point spline function, and stored as a depth information map; feature points including the nose tip, eye corners and mouth corners are extracted; the depth map is then divided into regions using these feature points, and the eye and mouth parts are selected and recombined into gray-scale data of the input size required by the subsequent training network; finally, expression recognition is performed on the resulting data with the trained model;
8) after steps 1)-3) are completed, steps 4)-7) are repeated, the normalized pictures from step 5) being stored in a two-dimensional database and the point cloud data from step 6) in a three-dimensional database, until sufficiently large two-dimensional and three-dimensional databases are obtained, after which learning is carried out with a self-built network training model;
9) the recognition process: steps 4)-7) are repeated, the two-dimensional data of step 5) and the three-dimensional data of step 6) are fed into the two trained models respectively, the probability distributions of each expression under two-dimensional and three-dimensional recognition are added, and the expression with the maximum probability value is selected as the final recognition result.
2. The special facial expression recognition method based on two-dimensional and three-dimensional fusion of binocular structured light according to claim 1, characterized in that the geometric normalization specifically comprises the following steps:
step one, the feature points are found: the two eyes and the nose tip serve as three feature points, giving the inter-eye distance d and the eye midpoint O;
step two, the picture is rotated using the coordinates of the two eyes so that the face is upright;
step three, the rectangular cutting area is determined from the eye and mouth feature points of the face; with O as the origin, a reasonable range containing enough face information is cut out above, below, left and right;
step four, the cropped expression-region image is rescaled so that all images are normalized to the same size;
finally, expression recognition is performed with the trained model on the gray-value matrix obtained after the preprocessing.
3. The special facial expression recognition method based on two-dimensional and three-dimensional fusion of binocular structured light according to claim 1 or 2, characterized in that the acquisition of training data is divided into two-dimensional and three-dimensional acquisition; the two-dimensional data is stored in a two-dimensional database after the preprocessing of step 5), and the three-dimensional data is stored in a three-dimensional database after the preprocessing of step 7); when the database samples are large enough, learning and training are performed on each database separately to obtain trained models, and during subsequent recognition the two models compute their respective probability distributions, which are then fused.
CN201910571295.0A 2019-06-28 2019-06-28 Special facial expression recognition method based on two-dimensional and three-dimensional fusion Active CN110348344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910571295.0A CN110348344B (en) 2019-06-28 2019-06-28 Special facial expression recognition method based on two-dimensional and three-dimensional fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910571295.0A CN110348344B (en) 2019-06-28 2019-06-28 Special facial expression recognition method based on two-dimensional and three-dimensional fusion

Publications (2)

Publication Number Publication Date
CN110348344A CN110348344A (en) 2019-10-18
CN110348344B 2021-07-27

Family

ID=68177246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910571295.0A Active CN110348344B (en) 2019-06-28 2019-06-28 Special facial expression recognition method based on two-dimensional and three-dimensional fusion

Country Status (1)

Country Link
CN (1) CN110348344B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191609A (en) * 2019-12-31 2020-05-22 上海能塔智能科技有限公司 Face emotion recognition method and device, electronic equipment and storage medium
CN112163552A (en) * 2020-10-14 2021-01-01 北京达佳互联信息技术有限公司 Labeling method and device for key points of nose, electronic equipment and storage medium
CN113780141A (en) * 2021-08-31 2021-12-10 Oook(北京)教育科技有限责任公司 Method and device for constructing playing model


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298995B (en) * 2014-05-06 2017-08-08 深圳市唯特视科技有限公司 Three-dimensional face identifying device and method based on three-dimensional point cloud
US20180190377A1 (en) * 2016-12-30 2018-07-05 Dirk Schneemann, LLC Modeling and learning character traits and medical condition based on 3d facial features
CN109886091B (en) * 2019-01-08 2021-06-01 东南大学 Three-dimensional facial expression recognition method based on weighted local rotation mode

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678235A (en) * 2015-12-30 2016-06-15 北京工业大学 Three dimensional facial expression recognition method based on multiple dimensional characteristics of representative regions
CN106909873A (en) * 2016-06-21 2017-06-30 湖南拓视觉信息技术有限公司 The method and apparatus of recognition of face
CN107330371A (en) * 2017-06-02 2017-11-07 深圳奥比中光科技有限公司 Acquisition methods, device and the storage device of the countenance of 3D facial models
CN109003308A (en) * 2018-06-27 2018-12-14 浙江大学 A kind of special areas imaging camera calibration system and method based on phase code

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"A novel approach to classification of facial expressions from 3D-mesh data-sets using modifie PCA";Venkatesh Y 等;《Pattern Recognition Letters》;20091231;第30卷(第12期);第1128-1137页 *

Also Published As

Publication number Publication date
CN110348344A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN108564041B (en) Face detection and restoration method based on RGBD camera
CN110348344B (en) Special facial expression recognition method based on two-dimensional and three-dimensional fusion
CN106874871B (en) Living body face double-camera identification method and identification device
CN109344813B (en) RGBD-based target identification and scene modeling method
CN107067429A (en) Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced
CN111932678B (en) Multi-view real-time human motion, gesture, expression and texture reconstruction system
EP3905104B1 (en) Living body detection method and device
CN104598915A (en) Gesture recognition method and gesture recognition device
CN109889799B (en) Monocular structure light depth perception method and device based on RGBIR camera
CN110189294A (en) RGB-D image significance detection method based on depth Analysis on confidence
CN110135277B (en) Human behavior recognition method based on convolutional neural network
CN108154536A (en) The camera calibration method of two dimensional surface iteration
CN104036541A (en) Fast three-dimensional reconstruction method in vision measurement
CN109285183A (en) A kind of multimode video image method for registering based on moving region image definition
CN110909571B (en) High-precision face recognition space positioning method
CN111881841B (en) Face detection and recognition method based on binocular vision
CN103533332A (en) Image processing method for converting 2D video into 3D video
CN116503567B (en) Intelligent modeling management system based on AI big data
CN110852335B (en) Target tracking system based on multi-color feature fusion and depth network
CN109509194B (en) Front human body image segmentation method and device under complex background
CN117218192A (en) Weak texture object pose estimation method based on deep learning and synthetic data
CN106909880A (en) Facial image preprocess method in recognition of face
CN116182736A (en) Automatic detection device and detection method for parameters of sheep three-dimensional body ruler based on double-view depth camera
CN111160278B (en) Face texture structure data acquisition method based on single image sensor
CN106971190A (en) Sexual discriminating method based on human somatotype

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant