CN110059537A - Three-dimensional face data acquisition method and device based on a Kinect sensor - Google Patents

Three-dimensional face data acquisition method and device based on a Kinect sensor

Info

Publication number
CN110059537A
CN110059537A
Authority
CN
China
Prior art keywords
face, rgb, data, coordinate system, dimensional point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910145985.XA
Other languages
Chinese (zh)
Inventor
骞志彦
王国强
张斌
陈学伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sight Margin (shanghai) Intelligent Technology Co Ltd
Original Assignee
Sight Margin (shanghai) Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sight Margin (shanghai) Intelligent Technology Co Ltd
Priority to CN201910145985.XA
Publication of CN110059537A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The present invention provides a three-dimensional face data acquisition method and device based on a Kinect sensor. The method comprises: obtaining RGB image data and depth image data of a face; performing a spatial conversion on the depth image data to obtain three-dimensional face point coordinate data in the depth camera coordinate system; projecting the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system; and mapping the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud. The present invention thereby addresses the problems that the three-dimensional data captured by existing Kinect sensors are of relatively low quality, suffer from missing data at "blind spots", have relatively low depth resolution, and are noisy.

Description

Three-dimensional face data acquisition method and device based on a Kinect sensor
Technical field
The invention belongs to the technical field of face recognition, and more particularly to a three-dimensional face data acquisition method and device based on a Kinect sensor.
Background technique
A face database is the standard platform for quantitatively evaluating different face recognition algorithms, and is the foundation for developing solid and reliable face recognition systems.
Compared with the large number of two-dimensional face databases, three-dimensional face databases are relatively few. Most existing face databases acquire face data with high-quality laser scanners, so there is an imbalance between the acquired two-dimensional face data and three-dimensional face data in acquisition efficiency and data precision. In addition, capturing a high-resolution RGB image takes far less time than laser-scanning a face; to avoid significantly slowing down non-cooperative two-dimensional face acquisition when integrated with three-dimensional face data acquisition, high-quality three-dimensional face scanning requires careful user cooperation. A Kinect sensor overcomes these problems by providing two-dimensional and three-dimensional data simultaneously at interactive rates. However, the quality of the three-dimensional data captured by a Kinect sensor is relatively low: data are missing at "blind spots", the depth resolution is relatively low, and substantial noise is introduced by the many depth conversions and by the spatial calibration/mapping between the RGB and depth images. Furthermore, no existing three-dimensional face database provides three-dimensional video sequences, because traditional three-dimensional laser scanners cannot acquire three-dimensional data in real time; this lack of three-dimensional video data limits three-dimensional face recognition methods based on three-dimensional imagery.
Summary of the invention
The present invention provides a three-dimensional face data acquisition method and device based on a Kinect sensor, to solve the problems that the three-dimensional data captured by existing Kinect sensors are of relatively low quality, suffer from missing data at "blind spots", have relatively low depth resolution, and are noisy.
To solve the above technical problems, the present invention provides a three-dimensional face data acquisition method based on a Kinect sensor, comprising:
obtaining RGB image data and depth image data of a face;
performing a spatial conversion on the depth image data to obtain three-dimensional face point coordinate data in the depth camera coordinate system;
projecting the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system;
mapping the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud.
According to an embodiment of the present invention, the step of obtaining RGB image data and depth image data of a face comprises:
obtaining the RGB image data with an RGB camera;
obtaining the depth image data with a depth camera.
According to another embodiment of the present invention, the step of obtaining the depth image data with a depth camera comprises:
obtaining a disparity map of the face with the depth camera;
calculating the depth image data from the disparity map of the face by the triangulation of the Kinect sensor.
According to another embodiment of the present invention, the step of projecting the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system comprises:
transforming the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system;
performing distortion correction on the three-dimensional face point coordinate data in the RGB camera coordinate system;
mapping the corrected three-dimensional face point coordinate data in the RGB camera coordinate system to the RGB image origin to obtain the position data of the three-dimensional face points in the RGB camera coordinate system.
According to another embodiment of the present invention, the step of mapping the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud comprises:
determining the correspondence between the RGB image data and the depth image data;
mapping, according to the correspondence, the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud;
recording the colored three-dimensional point cloud.
According to another embodiment of the present invention, the step of determining the correspondence between the RGB image data and the depth image data comprises:
performing noise removal on the three-dimensional face point coordinate data in the RGB camera coordinate system;
marking facial key points on the three-dimensional face point coordinate data in the RGB camera coordinate system;
determining the correspondence between the facial key points of the three-dimensional face point coordinate data in the RGB camera coordinate system and the three-dimensional points of the depth image.
On the other hand, the present invention also provides a three-dimensional face data acquisition device based on a Kinect sensor, comprising:
an acquisition module, for obtaining RGB image data and depth image data of a face;
a spatial conversion module, for performing a spatial conversion on the depth image data to obtain three-dimensional face point coordinate data in the depth camera coordinate system;
a projection module, for projecting the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system;
a color mapping module, for mapping the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud.
According to an embodiment of the present invention, the acquisition module is a Kinect sensor, and the Kinect sensor comprises:
an RGB camera unit, for obtaining the RGB image data;
a depth camera unit, for obtaining the depth image data.
The depth camera unit comprises an IR laser emitter and an IR camera; the IR laser emitter projects a preset speckle pattern into the scene, and the IR camera photographs the reflection of the projected speckle pattern to obtain the depth image data.
According to another embodiment of the present invention, the projection module comprises:
a conversion unit, for transforming the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system;
a correction unit, for performing distortion correction on the three-dimensional face point coordinate data in the RGB camera coordinate system;
a coordinate mapping unit, for mapping the corrected three-dimensional face point coordinate data in the RGB camera coordinate system to the RGB image origin to obtain the position data of the three-dimensional face points in the RGB camera coordinate system.
According to another embodiment of the present invention, the color mapping module comprises:
a correspondence determination unit, for determining the correspondence between the RGB image data and the depth image data;
a color mapping unit, for mapping, according to the correspondence, the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud;
a recording unit, for recording the colored three-dimensional point cloud.
Beneficial effects of the present invention:
In the three-dimensional face data acquisition method based on a Kinect sensor of the embodiment of the present invention, the RGB image data and depth image data of a face are first obtained; a spatial conversion is then performed on the depth image data to obtain three-dimensional face point coordinate data in the depth camera coordinate system; the three-dimensional face point coordinate data in the depth camera coordinate system are then projected onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system; finally, the RGB color information of the RGB image data is mapped onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud. The acquisition method of this embodiment addresses the relatively low quality of the three-dimensional data captured by the Kinect, the missing data at "blind spots", the relatively low depth resolution, and the high noise. With the acquisition method of this embodiment, three-dimensional face data that meet the requirements can be obtained, supplementing the number of three-dimensional face databases and thereby solving the problem that solid and reliable face recognition systems cannot be developed effectively because three-dimensional face databases are relatively few.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without any creative effort.
Fig. 1 is a flow diagram of one embodiment of a three-dimensional face data acquisition method based on a Kinect sensor according to the present invention;
Fig. 2 is a flow diagram of one embodiment of step 100 of the three-dimensional face data acquisition method based on a Kinect sensor according to the present invention;
Fig. 3 is a flow diagram of one embodiment of step 300 of the three-dimensional face data acquisition method based on a Kinect sensor according to the present invention;
Fig. 4 is a flow diagram of one embodiment of step 400 of the three-dimensional face data acquisition method based on a Kinect sensor according to the present invention;
Fig. 5 is a structural schematic diagram of one embodiment of a three-dimensional face data acquisition device based on a Kinect sensor according to the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in combination with the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, the embodiment of the present invention provides a three-dimensional face data acquisition method based on a Kinect sensor, comprising:
Step 100: obtaining RGB image data and depth image data of a face;
A controlled indoor shooting environment usually needs to be established before acquiring the image data: the Kinect sensor is mounted stably on top of a laptop, parallel to the ground; the subject stands 0.7 m to 0.9 m in front of the Kinect sensor; a simple background, such as a whiteboard, is placed behind each subject at a distance of 1.25 m from the Kinect sensor; and an LED fill light facing the subject provides different illumination levels. During acquisition, the subject follows a preset acquisition script, which includes slow head movements in the horizontal (yaw) and vertical (pitch) directions. The Kinect sensor automatically captures, processes and organizes the subject's face for database recording according to a predefined database structure (the OpenNI library). The RGB image is represented as I_RGB(x, y) = {v_R, v_G, v_B}, where v_R, v_G, v_B are the R, G and B channel values at image position (x, y); the depth image is represented as I_Depth(x, y) = z_world, where z_world is the depth value at image position (x, y).
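By way of illustration, the image representations I_RGB and I_Depth described above can be sketched as plain arrays. The 640x480 resolution and the use of 0 for a missing depth reading are assumptions typical of a first-generation Kinect, not values fixed by this description:

```python
import numpy as np

# Assumed Kinect v1 frame size (640x480); illustrative only.
H, W = 480, 640

# I_RGB(x, y) = {v_R, v_G, v_B}: one 8-bit value per color channel and pixel.
rgb_image = np.zeros((H, W, 3), dtype=np.uint8)

# I_Depth(x, y) = z_world: one depth value per pixel, here in millimetres.
depth_image = np.zeros((H, W), dtype=np.uint16)

# A "blind spot" (no depth reading) is conventionally stored as 0.
valid = depth_image > 0
print(rgb_image.shape, depth_image.shape, int(valid.sum()))
```

Keeping the depth frame as a single-channel 16-bit array preserves millimetre precision while matching the pixel grid of the RGB frame one-to-one.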
Step 200: performing a spatial conversion on the depth image data to obtain three-dimensional face point coordinate data in the depth camera coordinate system;
In this step a three-dimensional coordinate conversion is applied to the depth image data. The three-dimensional coordinates (x_world, y_world, z_world) of each point are calculated as follows:
I_Depth(x, y) = z_world (1)
x_world = (z_world / f) · (x − x_0 + δx) (2)
y_world = (z_world / f) · (y − y_0 + δy) (3)
where f is the focal length of the depth camera, (x_0, y_0) is the principal point of the depth image, and δx, δy are lens-distortion correction values; δx and δy are estimated in advance and are generally provided by the equipment supplier.
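The spatial conversion of step 200 amounts to a per-pixel pinhole back-projection of the depth image. A minimal sketch follows; the focal length and principal point are made-up placeholders for the supplier-provided calibration, and the lens-distortion offsets δx, δy are omitted for brevity:

```python
import numpy as np

def depth_to_points(depth, f, x0, y0):
    """Back-project a depth image (mm) into 3-D points in the depth-camera
    frame: x_world = (z/f)*(x - x0), y_world = (z/f)*(y - y0), z_world = z.
    The supplier-calibrated distortion offsets are omitted in this sketch."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (xs - x0) * z / f
    y = (ys - y0) * z / f
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # discard "blind spot" pixels (depth 0)

# A flat patch 1 m from the sensor; intrinsics are illustrative values.
depth = np.full((4, 4), 1000, dtype=np.uint16)
pts = depth_to_points(depth, f=580.0, x0=2.0, y0=2.0)
print(pts.shape)   # (16, 3): every pixel has a valid depth
```

The pixel at the principal point back-projects onto the optical axis, i.e. to (0, 0, z_world).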
Step 300: projecting the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system;
In this step the RGB-D (depth) registration of the face data is performed, i.e. the alignment of the RGB image with the depth image: the three-dimensional coordinates based on the depth camera are converted, by an affine transformation, into the three-dimensional coordinate system defined by the RGB camera.
Step 400: mapping the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud.
The correspondence between the RGB image and the depth image is found, the RGB colors are mapped directly onto the three-dimensional point coordinates, and the three-dimensional point cloud with the corresponding color mapping is then recorded.
Finally, the acquisition script can be used to store aligned RGB and depth video sequences from the RGB camera and the depth camera.
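The color mapping of step 400 is, in essence, an image lookup: each three-dimensional point that projects inside the RGB frame takes the color of the pixel it lands on. A sketch under that reading; function and variable names are illustrative only:

```python
import numpy as np

def colorize(points_rgb_cam, pixel_xy, rgb_image):
    """Attach RGB colors to 3-D points already registered to the RGB camera.

    pixel_xy holds the integer (x_RGB, y_RGB) image position of each point.
    Points projecting outside the image are dropped rather than colored."""
    h, w = rgb_image.shape[:2]
    xs, ys = pixel_xy[:, 0], pixel_xy[:, 1]
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    colors = rgb_image[ys[inside], xs[inside]].astype(np.float64)
    return np.hstack([points_rgb_cam[inside], colors])  # (N, 6): x,y,z,R,G,B

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 1] = (10, 20, 30)
points = np.array([[1.0, 2.0, 3.0], [5.0, 5.0, 5.0]])
pixels = np.array([[1, 0], [9, 9]])          # second point falls off-image
cloud = colorize(points, pixels, rgb)
print(cloud)   # one colored point: [1. 2. 3. 10. 20. 30.]
```

Storing the result as N rows of (x, y, z, R, G, B) is a common, compact layout for recording a colored point cloud.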
In the embodiment of the present invention, four types of data are captured during acquisition: 1) 2D RGB images; 2) 2.5D depth maps; 3) 3D point clouds; 4) RGB-D video sequences. A variety of facial variations are designed over two stages, including: neutral expression, smiling, open mouth, strong light, occlusion by sunglasses, occlusion by a hand, occlusion by paper, right-side occlusion, and left-side occlusion. All photos are taken under controlled conditions, but there are no restrictions on the participants' clothing, make-up or hair style. In addition, a script is designed to record an RGB-D video sequence of each person, including slow head movements in the horizontal (yaw) and vertical (pitch) directions. This protocol allows frames of multiple different poses to be extracted (in addition to the left/right profiles recorded in the static images), which can be used to test the robustness of 2D/3D face recognition algorithms; video-based face recognition can also be studied on this data set.
The embodiment of the present invention proposes a three-dimensional face data acquisition method based on a Kinect sensor: the RGB image data and depth image data of a face are first obtained; a spatial conversion is then performed on the depth image data to obtain three-dimensional face point coordinate data in the depth camera coordinate system; the three-dimensional face point coordinate data in the depth camera coordinate system are then projected onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system; finally, the RGB color information of the RGB image data is mapped onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud. The method of this embodiment addresses the relatively low quality of the three-dimensional data captured by the Kinect, the missing data at "blind spots", the relatively low depth resolution, and the high noise; three-dimensional face data that meet the requirements can thus be obtained, supplementing the number of three-dimensional face databases and solving the problem that solid and reliable face recognition systems cannot be developed effectively because three-dimensional face databases are relatively few.
As an example, as shown in Fig. 2, step 100 of the three-dimensional face data acquisition method based on a Kinect sensor of the embodiment of the present invention comprises:
Step 101: obtaining the RGB image data with an RGB camera;
Step 102: obtaining the depth image data with a depth camera.
As an example, as shown in Fig. 2, step 102 of the three-dimensional face data acquisition method based on a Kinect sensor of the embodiment of the present invention comprises:
Step 1021: obtaining a disparity map of the face with the depth camera;
Step 1022: calculating the depth image data from the disparity map of the face by the triangulation of the Kinect sensor.
In this embodiment the RGB camera directly acquires the RGB image I_RGB, while the depth camera, consisting of an IR laser emitter and an IR camera, obtains range information from the scene. The IR laser emitter projects a pre-designed speckle pattern, generated by shining light through a grating, into the scene, and the IR camera captures the reflection of the pattern. The captured pattern is then compared with a reference pattern (recorded at a predefined, known distance) to generate a disparity map I_Disparity, with a disparity value d at each point. From the resulting disparity map I_Disparity, a depth map I_Depth is derived directly by a simple triangulation method. The triangulation of the Kinect sensor is given by:
z = Z_0 / (1 + Z_0 · d' / (f · b)) (4)
where z is the distance between the Kinect sensor and the real-world position (i.e. the depth, in mm), and d' is the normalized disparity obtained by normalizing the raw disparity value d, which lies between 0 and 2047:
d' = m · d + n (5)
where m and n are normalization parameters; b and f are the baseline length and the focal length, respectively; and Z_0 is the distance between the Kinect sensor and the predefined reference pattern. The calibration parameters, including b, f and Z_0, are generally estimated and provided by the equipment supplier.
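The disparity-to-depth triangulation described above (normalize the raw disparity, then convert to metric depth) can be sketched directly. The calibration constants below stand in for the supplier-provided values and were chosen only to make the example run:

```python
def disparity_to_depth(d, m, n, b, f, z0):
    """Kinect triangulation sketch: normalize the raw disparity with
    d_norm = m*d + n, then z = z0 / (1 + z0*d_norm/(f*b)), with z in mm.
    m, n, b (baseline), f (focal length), z0 (reference distance) are the
    supplier-provided calibration constants."""
    d_norm = m * d + n
    return z0 / (1.0 + z0 * d_norm / (f * b))

# Placeholder calibration values, for illustration only.
m, n, b, f, z0 = 1.0, -600.0, 75.0, 580.0, 1000.0
print(disparity_to_depth(600, m, n, b, f, z0))   # 1000.0: zero normalized disparity gives z = z0
```

Note that a zero normalized disparity recovers exactly the reference distance z0, and the depth decreases monotonically as the normalized disparity grows.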
As another example, as shown in Fig. 3, step 300 of the three-dimensional face data acquisition method based on a Kinect sensor of the embodiment of the present invention comprises:
Step 301: transforming the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system;
In this step the three-dimensional face point coordinates in the depth camera coordinate system are first transformed, by an affine transformation, into the three-dimensional coordinate system defined by the RGB camera:
(x', y', z')ᵀ = R · (x_world, y_world, z_world)ᵀ + T (6)
where R ∈ R^(3×3) is a rotation matrix and T ∈ R^(3×1) is a translation vector.
Then, according to the focal length f_RGB of the RGB camera, the three-dimensional coordinates based on the RGB camera are mapped onto the ideal, undistorted RGB camera coordinate system:
x̂ = f_RGB · x' / z',  ŷ = f_RGB · y' / z' (7)
Step 302: performing distortion correction on the three-dimensional face point coordinate data in the RGB camera coordinate system;
Step 303: mapping the corrected three-dimensional face point coordinate data in the RGB camera coordinate system to the RGB image origin to obtain the position data of the three-dimensional face points in the RGB camera coordinate system.
Finally, the lens distortion must be corrected and the result mapped to the RGB image origin to recover the true position (x_RGB, y_RGB) of each three-dimensional point in the RGB camera coordinate system, where the matrices D ∈ R^(3×3) and V ∈ R^(3×3) contain the factory calibration parameters provided with the Kinect.
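Steps 301 to 303 chain the affine transform (rotation R and translation T) with the ideal undistorted projection using the RGB focal length f_RGB. The sketch below implements only that distortion-free part; the correction with the factory matrices D and V is omitted because its exact form depends on the device calibration, and the identity extrinsics and intrinsic values used are illustrative:

```python
import numpy as np

def project_to_rgb(points_depth_cam, R, T, f_rgb, x0, y0):
    """Map depth-camera points into RGB pixel coordinates: first the affine
    transform P' = R @ P + T into the RGB camera frame, then the ideal
    undistorted projection (f_rgb*x'/z', f_rgb*y'/z'), shifted by the
    RGB principal point (x0, y0). Distortion correction is omitted."""
    p = points_depth_cam @ R.T + T
    u = f_rgb * p[:, 0] / p[:, 2] + x0
    v = f_rgb * p[:, 1] / p[:, 2] + y0
    return np.stack([u, v], axis=-1), p

# Illustrative extrinsics: depth and RGB cameras assumed coincident.
R, T = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 1000.0], [100.0, 0.0, 1000.0]])
uv, _ = project_to_rgb(pts, R, T, f_rgb=525.0, x0=320.0, y0=240.0)
print(uv)   # [[320.  240. ] [372.5 240. ]]
```

With identity extrinsics a point on the optical axis lands exactly on the principal point, which makes the projection easy to sanity-check.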
As another example, as shown in Fig. 4, step 400 of the three-dimensional face data acquisition method based on a Kinect sensor of the embodiment of the present invention comprises:
Step 401: determining the correspondence between the RGB image data and the depth image data;
Step 402: mapping, according to the correspondence, the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud;
Step 403: recording the colored three-dimensional point cloud.
In this embodiment the correspondence between the RGB image and the depth image is found, the RGB colors are mapped directly onto the three-dimensional coordinates, and the three-dimensional point cloud with the corresponding colors is then recorded.
As another example, step 401 of the three-dimensional face data acquisition method based on a Kinect sensor of the embodiment of the present invention comprises:
performing noise removal on the three-dimensional face point coordinate data in the RGB camera coordinate system;
marking facial key points on the three-dimensional face point coordinate data in the RGB camera coordinate system;
determining the correspondence between the facial key points of the three-dimensional face point coordinate data in the RGB camera coordinate system and the three-dimensional points of the depth image.
In this embodiment noise removal needs to be performed on the three-dimensional face point coordinate data in the RGB camera coordinate system. The noise removal method may be a point cloud denoising method based on statistical analysis, a point cloud smoothing method based on least squares, or a point cloud simplification method based on clustering. The facial key points are then marked, and the data are cropped and normalized using the facial coordinates, reducing the sampling dimension of the 2D and 2.5D faces. The 3D surface cropping is realized by keeping the vertices inside a sphere of radius 100 mm whose center lies 20 mm from the nose tip in the +z direction; spikes are removed by thresholding, a hole-filling process is applied, and a bilateral smoothing filter removes white noise while preserving edges.
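The 100 mm sphere crop described above is straightforward to express on a point cloud. The nose-tip coordinate below is a placeholder, and the 20 mm center offset along +z follows the description:

```python
import numpy as np

def crop_face(points, nose_tip, radius=100.0, offset_mm=20.0):
    """Keep only the vertices inside a sphere of the given radius (mm) whose
    center lies offset_mm from the nose tip along the +z direction."""
    center = np.asarray(nose_tip, dtype=np.float64) + [0.0, 0.0, offset_mm]
    keep = np.linalg.norm(points - center, axis=1) <= radius
    return points[keep]

nose = np.array([0.0, 0.0, 800.0])           # placeholder nose-tip position
cloud = np.array([[0.0, 0.0, 820.0],         # at the sphere center: kept
                  [0.0, 90.0, 820.0],        # 90 mm away, inside: kept
                  [0.0, 0.0, 1300.0]])       # background clutter: cropped
print(crop_face(cloud, nose))   # keeps the first two points
```

A fixed-radius crop anchored at a landmark keeps the face region comparable across subjects before the later normalization.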
Facial marking: six anchor points are first defined on the face, namely the left eye center, right eye center, nose tip, left mouth corner, right mouth corner and chin. They are labelled manually on the RGB image, and the corresponding positions and three-dimensional points on the depth map are then found directly through the established point correspondence.
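Once the RGB and depth images are pixel-aligned, lifting the six manually labelled anchor points to 3-D is a depth lookup followed by the same back-projection used for the full depth frame. The anchor labels and calibration numbers below are illustrative:

```python
import numpy as np

def landmarks_to_3d(landmarks_2d, depth_image, f, x0, y0):
    """Lift 2-D facial anchor points to 3-D through the aligned depth map:
    read the depth z at each labelled pixel, then back-project with the
    pinhole model (distortion offsets omitted in this sketch)."""
    out = {}
    for name, (px, py) in landmarks_2d.items():
        z = float(depth_image[py, px])
        out[name] = ((px - x0) * z / f, (py - y0) * z / f, z)
    return out

depth = np.full((4, 4), 1000, dtype=np.uint16)   # toy aligned depth map
anchors = {"nose_tip": (2, 2), "chin": (2, 3)}   # illustrative pixel labels
pts3d = landmarks_to_3d(anchors, depth, f=580.0, x0=2.0, y0=2.0)
print(pts3d["nose_tip"])   # (0.0, 0.0, 1000.0)
```

In practice the labelled pixel may fall on a "blind spot" with depth 0; a robust implementation would fall back to a nearby valid depth reading.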
On the other hand, as shown in Fig. 5, the embodiment of the present invention also provides a three-dimensional face data acquisition device based on a Kinect sensor, comprising:
an acquisition module 10, for obtaining the RGB image data and depth image data of a face;
a spatial conversion module 20, for performing a spatial conversion on the depth image data to obtain three-dimensional face point coordinate data in the depth camera coordinate system;
a projection module 30, for projecting the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system;
a color mapping module 40, for mapping the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud.
This embodiment also provides, corresponding to the above three-dimensional face data acquisition method, a three-dimensional face data acquisition device based on a Kinect sensor. The device has a simple structure, comprising an acquisition module, a spatial conversion module, a projection module and a color mapping module. Obtaining three-dimensional face image data with the device of this embodiment addresses the relatively low quality of the three-dimensional data captured by the Kinect, the missing data at "blind spots", the relatively low depth resolution, and the high noise.
As an example, in the three-dimensional face data acquisition device based on a Kinect sensor of the embodiment of the present invention, the acquisition module 10 is a Kinect sensor, and the Kinect sensor comprises:
an RGB camera unit, for obtaining the RGB image data;
a depth camera unit, for obtaining the depth image data.
Optionally, the depth camera unit comprises an IR laser emitter and an IR camera; the IR laser emitter projects a preset speckle pattern into the scene, and the IR camera photographs the reflection of the projected speckle pattern to obtain the depth image data.
According to another embodiment of the present invention, the projection module 30 comprises:
a conversion unit, for transforming the three-dimensional face point coordinate data in the depth camera coordinate system onto the RGB camera coordinate system to obtain three-dimensional face point coordinate data in the RGB camera coordinate system;
a correction unit, for performing distortion correction on the three-dimensional face point coordinate data in the RGB camera coordinate system;
a coordinate mapping unit, for mapping the corrected three-dimensional face point coordinate data in the RGB camera coordinate system to the RGB image origin to obtain the position data of the three-dimensional face points in the RGB camera coordinate system.
According to another embodiment of the present invention, the color mapping module 40 comprises:
a correspondence determination unit, for determining the correspondence between the RGB image data and the depth image data;
a color mapping unit, for mapping, according to the correspondence, the RGB color information of the RGB image data onto the three-dimensional face point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud;
a recording unit, for recording the colored three-dimensional point cloud.
The embodiments of the present invention have been described above with reference to the drawings, but the invention is not limited to the specific embodiments described, which are only illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art can devise many other forms without departing from the purpose of the invention and the scope protected by the claims, and these shall all fall within the protection of the present invention.

Claims (10)

1. A three-dimensional face data acquisition method based on a Kinect sensor, characterized by comprising:
acquiring RGB image data and depth image data of a face;
performing spatial conversion on the depth image data to obtain face three-dimensional point coordinate data in a depth camera coordinate system;
projecting the face three-dimensional point coordinate data in the depth camera coordinate system into an RGB camera coordinate system to obtain face three-dimensional point coordinate data in the RGB camera coordinate system;
mapping the RGB color information of the RGB image data onto the face three-dimensional point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud.
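The four steps of claim 1 can be sketched end to end as follows. This is a minimal illustrative sketch, not the patent's implementation: the pinhole intrinsic matrices `K_d` and `K_rgb`, and the extrinsic rotation/translation `(R, t)` between the depth and RGB cameras, are assumed calibration inputs whose names are hypothetical.

```python
import numpy as np

def depth_to_colored_cloud(depth, rgb, K_d, K_rgb, R, t):
    """Back-project a depth map to 3D, move the points into the RGB
    camera frame, and attach a color to each point (claim 1 pipeline)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel().astype(np.float64)
    valid = z > 0                      # keep only pixels with a depth reading
    u, v, z = u.ravel()[valid], v.ravel()[valid], z[valid]
    # step 2: spatial conversion, pixel -> depth camera coordinates
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts_d = np.stack([x, y, z], axis=1)
    # step 3: rigid transform into the RGB camera coordinate system
    pts_rgb = pts_d @ R.T + t
    # project onto the RGB image to look up each point's color
    uc = K_rgb[0, 0] * pts_rgb[:, 0] / pts_rgb[:, 2] + K_rgb[0, 2]
    vc = K_rgb[1, 1] * pts_rgb[:, 1] / pts_rgb[:, 2] + K_rgb[1, 2]
    ui = np.clip(np.round(uc).astype(int), 0, rgb.shape[1] - 1)
    vi = np.clip(np.round(vc).astype(int), 0, rgb.shape[0] - 1)
    # step 4: color mapping -> N x 6 colored cloud (x, y, z, r, g, b)
    colors = rgb[vi, ui]
    return np.hstack([pts_rgb, colors])
```

Nearest-neighbor color lookup is used here for brevity; occlusion handling and sub-pixel interpolation are omitted.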
2. The three-dimensional face data acquisition method based on the Kinect sensor according to claim 1, characterized in that the step of acquiring the RGB image data and the depth image data of the face comprises:
acquiring the RGB image data using an RGB camera;
acquiring the depth image data using a depth camera.
3. The three-dimensional face data acquisition method based on the Kinect sensor according to claim 2, characterized in that the step of acquiring the depth image data using the depth camera comprises:
acquiring a disparity map of the face using the depth camera;
calculating the depth image data from the disparity map of the face based on the triangulation of the Kinect sensor.
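The triangulation named in claim 3 follows the standard structured-light relation: for focal length f (in pixels), baseline b between the IR emitter and the IR camera, and disparity d, depth is Z = f·b/d. A minimal sketch, with hypothetical parameter values (the patent does not specify f or b):

```python
import numpy as np

def disparity_to_depth(disparity, f, b, eps=1e-6):
    """Standard triangulation: Z = f * b / d.
    f: focal length in pixels, b: emitter-to-camera baseline in meters."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(d)
    valid = d > eps                 # zero disparity means no measurement
    depth[valid] = f * b / d[valid]
    return depth
```

For example, with an assumed f of 580 px and b of 0.075 m, a disparity of 10 px corresponds to a depth of 4.35 m.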
4. The three-dimensional face data acquisition method based on the Kinect sensor according to claim 1, characterized in that the step of projecting the face three-dimensional point coordinate data in the depth camera coordinate system into the RGB camera coordinate system to obtain the face three-dimensional point coordinate data in the RGB camera coordinate system comprises:
transforming the face three-dimensional point coordinate data in the depth camera coordinate system into the RGB camera coordinate system to obtain the face three-dimensional point coordinate data in the RGB camera coordinate system;
performing distortion correction on the face three-dimensional point coordinate data in the RGB camera coordinate system;
mapping the corrected face three-dimensional point coordinate data in the RGB camera coordinate system onto the RGB image to obtain position data of the face three-dimensional points in the RGB camera coordinate system.
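The patent does not specify which distortion model claim 4 uses; a common choice for RGB cameras is the Brown–Conrady model, sketched here when projecting points onto the RGB image. The coefficients `k1`, `k2`, `p1`, `p2` are hypothetical calibration values, not taken from the patent.

```python
import numpy as np

def project_with_distortion(pts, K, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Project 3D points in the RGB camera frame onto the RGB image,
    applying Brown-Conrady radial (k1, k2) and tangential (p1, p2) terms."""
    x = pts[:, 0] / pts[:, 2]           # normalized image coordinates
    y = pts[:, 1] / pts[:, 2]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = K[0, 0] * xd + K[0, 2]          # apply intrinsics
    v = K[1, 1] * yd + K[1, 2]
    return np.stack([u, v], axis=1)
```

With all coefficients zero this reduces to the ideal pinhole projection.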
5. The three-dimensional face data acquisition method based on the Kinect sensor according to claim 1, characterized in that the step of mapping the RGB color information of the RGB image data onto the face three-dimensional point coordinate data in the RGB camera coordinate system to obtain the colored three-dimensional point cloud comprises:
determining the correspondence between the RGB image data and the depth image data;
mapping the RGB color information of the RGB image data onto the face three-dimensional point coordinate data in the RGB camera coordinate system according to the correspondence to obtain the colored three-dimensional point cloud;
recording the colored three-dimensional point cloud.
6. The three-dimensional face data acquisition method based on the Kinect sensor according to claim 5, characterized in that the step of determining the correspondence between the RGB image data and the depth image data comprises:
performing noise removal on the face three-dimensional point coordinate data in the RGB camera coordinate system;
performing facial key point labeling on the face three-dimensional point coordinate data in the RGB camera coordinate system;
determining the correspondence between the face three-dimensional point coordinate data in the RGB camera coordinate system and the three-dimensional points of the facial key points of the depth image.
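Claim 6 does not name the noise-removal method. A common choice for point clouds of this kind is statistical outlier removal, which drops points whose mean distance to their nearest neighbors is far above the global average; the sketch below uses brute-force neighbor search, and the parameters `k` and `std_ratio` are hypothetical.

```python
import numpy as np

def remove_statistical_outliers(pts, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors
    exceeds (global mean + std_ratio * global std). O(N^2) brute force."""
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=2)          # pairwise distance matrix
    dist.sort(axis=1)
    mean_knn = dist[:, 1:k + 1].mean(axis=1)     # column 0 is self-distance 0
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[mean_knn <= thresh]
```

For real face clouds a KD-tree search (e.g. `scipy.spatial.cKDTree`) would replace the quadratic distance matrix.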
7. A three-dimensional face data acquisition device based on a Kinect sensor, characterized by comprising:
an acquisition module, configured to acquire RGB image data and depth image data of a face;
a spatial conversion module, configured to perform spatial conversion on the depth image data to obtain face three-dimensional point coordinate data in a depth camera coordinate system;
a projection module, configured to project the face three-dimensional point coordinate data in the depth camera coordinate system into an RGB camera coordinate system to obtain face three-dimensional point coordinate data in the RGB camera coordinate system;
a color mapping module, configured to map the RGB color information of the RGB image data onto the face three-dimensional point coordinate data in the RGB camera coordinate system to obtain a colored three-dimensional point cloud.
8. The three-dimensional face data acquisition device based on the Kinect sensor according to claim 7, characterized in that the acquisition module is a Kinect sensor, and the Kinect sensor comprises:
an RGB camera unit, configured to acquire the RGB image data;
a depth camera unit, configured to acquire the depth image data;
wherein the depth camera unit comprises an IR infrared laser emitter and an IR camera, the IR infrared laser emitter being configured to project a preset speckle pattern into the scene, and the IR camera being configured to capture the reflected image of the projected preset speckle pattern to obtain the depth image data.
9. The three-dimensional face data acquisition device based on the Kinect sensor according to claim 7, characterized in that the projection module comprises:
a converting unit, configured to transform the face three-dimensional point coordinate data in the depth camera coordinate system into the RGB camera coordinate system to obtain the face three-dimensional point coordinate data in the RGB camera coordinate system;
a correcting unit, configured to perform distortion correction on the face three-dimensional point coordinate data in the RGB camera coordinate system;
a coordinate mapping unit, configured to map the corrected face three-dimensional point coordinate data in the RGB camera coordinate system onto the RGB image to obtain position data of the face three-dimensional points in the RGB camera coordinate system.
10. The three-dimensional face data acquisition device based on the Kinect sensor according to claim 7, characterized in that the color mapping module comprises:
a correspondence determination unit, configured to determine the correspondence between the RGB image data and the depth image data;
a color mapping unit, configured to map the RGB color information of the RGB image data onto the face three-dimensional point coordinate data in the RGB camera coordinate system according to the correspondence to obtain the colored three-dimensional point cloud;
a recording unit, configured to record the colored three-dimensional point cloud.
CN201910145985.XA 2019-02-27 2019-02-27 A kind of three-dimensional face data acquisition methods and device based on Kinect sensor Pending CN110059537A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910145985.XA CN110059537A (en) 2019-02-27 2019-02-27 A kind of three-dimensional face data acquisition methods and device based on Kinect sensor


Publications (1)

Publication Number Publication Date
CN110059537A true CN110059537A (en) 2019-07-26

Family

ID=67316494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910145985.XA Pending CN110059537A (en) 2019-02-27 2019-02-27 A kind of three-dimensional face data acquisition methods and device based on Kinect sensor

Country Status (1)

Country Link
CN (1) CN110059537A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110020720A (en) * 2009-08-24 2011-03-03 삼성전자주식회사 3 dimension face capturing apparatus and method thereof
CN104680135A (en) * 2015-02-09 2015-06-03 浙江大学 Three-dimensional human face mark point detection method capable of resisting expression, posture and shielding changes
CN105306922A (en) * 2014-07-14 2016-02-03 联想(北京)有限公司 Method and device for obtaining depth camera reference diagram
CN107169475A (en) * 2017-06-19 2017-09-15 电子科技大学 A kind of face three-dimensional point cloud optimized treatment method based on kinect cameras
CN108564041A (en) * 2018-04-17 2018-09-21 广州云从信息科技有限公司 A kind of Face datection and restorative procedure based on RGBD cameras


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XI Xiaoxia; SONG Wen'ai; QIU Zixuan; SHI Lei: "Research on a 3D Image Reconstruction System Based on RGB-D Values", Journal of Test and Measurement Technology, no. 05, 30 October 2015 (2015-10-30), pages 409 - 415 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110445982A (en) * 2019-08-16 2019-11-12 深圳特蓝图科技有限公司 A kind of tracking image pickup method based on six degree of freedom equipment
CN110445982B (en) * 2019-08-16 2021-01-12 深圳特蓝图科技有限公司 Tracking shooting method based on six-degree-of-freedom equipment
CN111160278A (en) * 2019-12-31 2020-05-15 河南中原大数据研究院有限公司 Face texture structure data acquisition method based on single image sensor
CN111160278B (en) * 2019-12-31 2023-04-07 陕西西图数联科技有限公司 Face texture structure data acquisition method based on single image sensor
CN112529948A (en) * 2020-12-25 2021-03-19 南京林业大学 Mature pomegranate positioning method based on Mask R-CNN and 3-dimensional sphere fitting

Similar Documents

Publication Publication Date Title
CN105427385B (en) A kind of high-fidelity face three-dimensional rebuilding method based on multilayer deformation model
US9438878B2 (en) Method of converting 2D video to 3D video using 3D object models
CN104992441B (en) A kind of real human body three-dimensional modeling method towards individualized virtual fitting
CN106909875B (en) Face type classification method and system
US7221809B2 (en) Face recognition system and method
JP6125188B2 (en) Video processing method and apparatus
WO2015188684A1 (en) Three-dimensional model reconstruction method and system
CN109872397A (en) A kind of three-dimensional rebuilding method of the airplane parts based on multi-view stereo vision
CN110059537A (en) A kind of three-dimensional face data acquisition methods and device based on Kinect sensor
CN108510545A (en) Space-location method, space orientation equipment, space positioning system and computer readable storage medium
CN109685913A (en) Augmented reality implementation method based on computer vision positioning
CN110059602B (en) Forward projection feature transformation-based overlook human face correction method
CN106462943A (en) Aligning panoramic imagery and aerial imagery
CN109670390A (en) Living body face recognition method and system
CN103593641B (en) Object detecting method and device based on stereo camera
CN109752855A (en) A kind of method of hot spot emitter and detection geometry hot spot
CN104596442B (en) A kind of device and method of assist three-dimensional scanning
CN109903377A (en) A kind of three-dimensional face modeling method and system without phase unwrapping
CN114766042A (en) Target detection method, device, terminal equipment and medium
US9558406B2 (en) Image processing apparatus including an object setting section, image processing method, and program using the same
CN107292956A (en) A kind of scene reconstruction method assumed based on Manhattan
JP5419757B2 (en) Face image synthesizer
CN108052814A (en) A kind of 3D authentication systems
CN110348344A (en) A method of the special facial expression recognition based on two and three dimensions fusion
CN116597488A (en) Face recognition method based on Kinect database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination