CN109583304A - A fast 3D face point cloud generation method and device based on a structured-light module - Google Patents

A fast 3D face point cloud generation method and device based on a structured-light module Download PDF

Info

Publication number
CN109583304A
CN109583304A (application CN201811281494.XA)
Authority
CN
China
Prior art keywords
face
infrared
image
coding
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811281494.XA
Other languages
Chinese (zh)
Inventor
刘立恒
葛晨阳
刘欣
谢艳梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY Co Ltd
Original Assignee
NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY Co Ltd filed Critical NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY Co Ltd
Priority to CN201811281494.XA priority Critical patent/CN109583304A/en
Publication of CN109583304A publication Critical patent/CN109583304A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 — Detection; Localisation; Normalisation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 — Classification, e.g. identification
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10028 — Range image; Depth image; 3D point clouds
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30196 — Human being; Person
    • G06T2207/30201 — Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A fast 3D face point cloud generation method and device based on a structured-light module. The method comprises: S100: identifying and locating the image region containing a face through the RGB camera or infrared camera of the structured-light module, and marking the face region in the RGB image or infrared image; S200: turning on the infrared coded-pattern projector of the structured-light module, capturing an infrared coded image with the infrared camera, and mapping the face region into the infrared coded image to obtain an effective face coding region; S300: decoding the effective face coding region to obtain the depth information corresponding to the effective face coding region; S400: converting the depth information of the effective face coding region into a 3D face point cloud through coordinate transformation. The method greatly reduces the computing resources required by the structured-light module in smartphone 3D selfie, face unlocking, and 3D scanning/reconstruction applications, and facilitates real-time point cloud processing.

Description

A fast 3D face point cloud generation method and device based on a structured-light module
Technical field
The present disclosure belongs to the field of visual processing and artificial intelligence, and in particular relates to a fast 3D face point cloud generation method and device based on a structured-light module.
Background technique
Natural and harmonious human-computer interaction is an ideal goal of humanity's control of machines: a machine should understand commands conveyed by a person in a natural state. Using image processing techniques to obtain depth information for real-time 3D image recognition and motion capture makes it possible for people to interact with terminals in natural ways such as facial expression, gesture, and body motion. Depth perception is a core technology of natural human-computer interaction, with broad application prospects in machine vision, intelligent monitoring, 3D face recognition, 3D printing, and other fields. It has gradually extended from game-console peripherals to other intelligent terminals — including smart TVs, smartphones, PCs/tablets, and smart appliances — bringing users "science-fiction-like" control and a brand-new interaction experience.
With the development of science and technology, face recognition has matured: a variety of devices and systems such as laptops, mobile terminals, and access-control systems obtain video to be inspected and complete identity recognition by detecting faces in that video. However, since a planar image of a legitimate user's face is easy to obtain, an illegal user can insert a prosthetic face — for example, a photo of the legitimate user — into the video to be inspected, and pass face recognition with it, reducing the accuracy of face recognition and harming the user experience. The Apple iPhone X uses a structured-light module for 3D face recognition, raising the accuracy from roughly one error in fifty thousand for 2D face recognition to roughly one in a million, making 3D face recognition practical for smartphone unlocking, Face ID, and face payment. Because the iPhone X performs speckle decoding on the whole image, the computation is relatively large and relatively slow; how to obtain a 3D face point cloud quickly and accurately is the key to realizing 3D face recognition.
Summary of the invention
In view of this, the present disclosure provides a fast 3D face point cloud generation method based on a structured-light module, comprising:
S100: identifying and locating the image region containing a face through the RGB camera or infrared camera of the structured-light module, and marking the face region in the RGB image or infrared image;
S200: turning on the infrared coded-pattern projector of the structured-light module, capturing an infrared coded image with the infrared camera, and mapping the face region into the infrared coded image to obtain an effective face coding region;
S300: decoding the effective face coding region to obtain the depth information corresponding to the effective face coding region;
S400: converting the depth information of the effective face coding region into a 3D face point cloud through coordinate transformation.
The present disclosure further provides a fast 3D face point cloud generation device based on a structured-light module, comprising the structured-light module, a face locating and identification image unit, an effective face coding region extraction unit, a face region depth perception unit, and a 3D face point cloud generation unit, wherein:
The structured-light module comprises an infrared coded-pattern projector, an infrared camera, and an infrared floodlight source. The infrared coded-pattern projector is used to project a structured-light coded pattern; the infrared camera works together with the infrared coded-pattern projector or the infrared floodlight source to capture infrared coded images or plain infrared images; the infrared floodlight source is used to generate uniform infrared illumination of the target object or projection space;
The face locating and identification image unit is used to identify and locate the image region containing a face through the infrared camera of the structured-light module, marking the face region in the infrared image;
The effective face coding region extraction unit is used to turn on the infrared coded-pattern projector of the structured-light module, capture an infrared coded image with the infrared camera, and map the face region identified in the infrared image into the infrared coded image to obtain an effective face coding region;
The face region depth perception unit is used to decode the effective face coding region against the corresponding reference coding region and obtain the depth information corresponding to the effective face coding region;
The 3D face point cloud generation unit is used to convert the depth information of the effective face coding region into a 3D face point cloud through coordinate transformation.
Through the above technical solution, depth decoding is performed only on the effective face coding region marked for the face to generate a 3D face point cloud. The face 3D point cloud can thus be generated in real time, which reduces the computational load of the structured-light module in 3D selfie, face recognition, and 3D face scanning/reconstruction and achieves the goal of real-time processing.
Detailed description of the invention
Fig. 1 is a flow diagram of a fast 3D face point cloud generation method based on a structured-light module provided in an embodiment of the present disclosure;
Fig. 2 is a structural diagram of a fast 3D face point cloud generation device based on a structured-light module provided in an embodiment of the present disclosure;
Fig. 3 is a flow chart of RGB face detection and extraction provided in an embodiment of the present disclosure;
Fig. 4 is a depth decoding flow chart for a random speckle coded pattern provided in an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of the image coordinate system provided in an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of the relationship among the three coordinate systems provided in an embodiment of the present disclosure;
Fig. 7 is a structural diagram of the structured-light module provided in an embodiment of the present disclosure.
Specific embodiment
The present invention is further described in detail below with reference to Figs. 1 to 7.
In one embodiment, referring to Fig. 1, a fast 3D face point cloud generation method based on a structured-light module is disclosed, comprising:
S100: identifying and locating the image region containing a face through the RGB camera or infrared camera of the structured-light module, and marking the face region in the RGB image or infrared image;
S200: turning on the infrared coded-pattern projector of the structured-light module, capturing an infrared coded image with the infrared camera, and mapping the face region into the infrared coded image to obtain an effective face coding region;
S300: decoding the effective face coding region to obtain the depth information corresponding to the effective face coding region;
S400: converting the depth information of the effective face coding region into a 3D face point cloud through coordinate transformation.
Marking the face region in the RGB image or infrared image may be done by outlining the face region with a box in the RGB image or infrared IR image.
This embodiment decodes only the effective face coding region marked for the face to generate the 3D face point cloud; on the same platform, compared with decoding the entire image, the computational load is greatly reduced.
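The S100–S400 flow can be sketched as follows. Every function body here is an illustrative stub under assumed intrinsics (fx = fy = 500), not the patented implementation; the sketch only shows how restricting decoding to the face box shrinks the work to one sub-region:

```python
import numpy as np

def detect_face_region(image):
    """S100 (stub): return a face bounding box (x, y, w, h).
    A real system would use the RGB/IR face detectors described below."""
    h, w = image.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)  # placeholder box

def map_to_coded_image(box):
    """S200 (stub): map the face box into the IR coded image.
    Identity mapping when the IR camera captured both images."""
    return box

def decode_depth(coded, box):
    """S300 (stub): decode only the face sub-region into depth values."""
    x, y, w, h = box
    return coded[y:y + h, x:x + w].astype(np.float32)

def depth_to_points(depth, fx=500.0, fy=500.0):
    """S400: back-project a depth map into a 3D point cloud (camera frame)."""
    h, w = depth.shape
    cx, cy = w / 2, h / 2
    v, u = np.mgrid[0:h, 0:w]
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

ir = np.full((120, 160), 800, dtype=np.uint16)      # fake coded image
box = map_to_coded_image(detect_face_region(ir))
cloud = depth_to_points(decode_depth(ir, box))
print(cloud.shape)  # (4800, 3): one 3D point per face-region pixel
```

Only the pixels inside the face box (here 80 × 60 of a 160 × 120 frame, i.e. one quarter of the image) ever reach the decoding and back-projection stages, which is the source of the computational saving claimed above.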
In another embodiment, if the structured-light module includes an RGB camera, the RGB camera and the infrared camera are calibrated in advance and the RGB camera is used to identify and locate the face; if there is no RGB camera, the IR camera can be used to identify and locate the face in combination with infrared floodlighting.
It will be understood that this embodiment explains how the RGB camera — or, in the infrared case, the IR camera — identifies and locates the face.
In another embodiment,
(1) the specific steps of identifying and locating the face with the RGB camera include:
S10011: capturing an RGB image as the image to be inspected through the RGB camera of the structured-light module;
S10012: computing, for every pixel in the image to be inspected, its degree of similarity to skin colour, thereby obtaining a skin-colour likelihood map and segmenting out the skin-colour regions in the image to be inspected;
S10013: excluding non-skin-colour regions to complete the screening of candidate-region images;
S10014: making a face template and matching the candidate-region images against it, judging by detection whether each is a face region, so as to extract the face region.
Obviously, the above gives specific steps for identifying and locating the face with an RGB camera. It should be noted, however, that existing face recognition techniques using an RGB camera can equally be incorporated into the above embodiment. The emphasis of the disclosure does not lie in how the RGB camera locates and identifies the face; after all, many prior-art techniques locate and identify faces through an RGB camera and various algorithmic steps, and this embodiment merely gives one implementation of the disclosure.
(2) the specific steps of identifying and locating the face with the infrared camera include:
S10021: obtaining an infrared face data set and a non-infrared-face data set as the training set, preprocessing the training set, representing face features with Haar-like features, using an "integral image" to compute feature values quickly, and training an infrared face detector;
S10022: preprocessing the infrared face image to be input, and judging with the infrared face detector whether it is a face or not; if it is a face, extracting the face region according to the set window.
It should likewise be understood that this gives concrete example steps for identifying and locating the face with an infrared camera. Existing face recognition techniques using an IR camera can equally be incorporated into the above embodiment; the emphasis of the disclosure does not lie in how the IR camera locates and identifies the face. After all, many prior-art techniques locate and identify faces through an IR camera and various algorithmic steps, and this embodiment merely gives one implementation of the disclosure.
Further, the following are more specific examples of locating and identifying the face through the RGB camera or the IR camera. Once the prior art is understood, these specific examples shall not limit the technical solution disclosed in the first embodiment of the disclosure or the inventive concept of the disclosure:
In another embodiment, step S100 specifically includes:
The process of identifying and outlining the face region in the RGB image (see Fig. 3): an RGB image is captured by the RGB camera of the structured-light module. Since the overlap between the distributions of facial skin colour and non-facial skin colour in YCbCr space is small, skin-colour information can be used to separate face regions from non-face regions:
Denoise the RGB image and apply colour-compensation preprocessing;
Establish a skin-colour model: collect a large number of face picture samples; the collected samples should be diverse, covering different genders, ages, and types; crop out the facial skin areas to make statistical samples;
Transform RGB to the YCbCr colour space and count the Cb and Cr values of all pixels; according to the Gaussian model of skin colour, compute the mean, and at the same time compute the variance and covariance. It will be appreciated that the mean, variance, and covariance can also be computed in other ways.
On this basis, in short, using the characteristics of the facial and non-facial skin-colour distributions, the similarity of every pixel in the image to skin colour — the skin-colour likelihood — can be computed, yielding a skin-colour likelihood map. The skin-colour regions in the image can then be segmented out (for example, by binarizing the skin-colour likelihood map; other conventional processing means can of course also be used).
Illustratively, the noise in the skin-colour segmentation map can be removed to facilitate processing. Since a face contains non-skin regions such as the eyes and mouth, the face image produces many holes after the above processing; excluding regions that contain no holes allows the face region to be obtained more accurately. Whether a region is a face region is judged by counting the number of holes in it, completing the final region screening.
For example, to make a face template: select some RGB face images and compute their grey-level images; manually crop the face regions and set the template size according to face prior knowledge; standardize the grey-level distribution of the face template. The images that passed skin-colour screening are matched against the face template: input the image to be matched and call the template — a face template sized width W × height H (W and H are integers denoting numbers of pixels) — then compute the correlation coefficient to obtain the matching similarity. The correlation coefficient is compared with a set threshold; if it is greater than the threshold, the region is a face region, otherwise a non-face region. The face region is then extracted.
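The skin-colour Gaussian model described above can be sketched as follows. All sample values, the 0.5 threshold, and the (Cb, Cr) statistics here are invented for illustration; in a real system the samples would come from the cropped face-skin patches after the RGB → YCbCr transform:

```python
import numpy as np

def fit_skin_model(skin_cbcr):
    """Fit the Gaussian skin model: mean vector and covariance matrix
    of (Cb, Cr) over the collected skin-pixel samples."""
    return skin_cbcr.mean(axis=0), np.cov(skin_cbcr.T)

def skin_likelihood(cbcr, mean, cov):
    """Per-pixel skin likelihood exp(-0.5 * Mahalanobis^2), in (0, 1]."""
    d = cbcr - mean
    inv = np.linalg.inv(cov)
    md2 = np.einsum('...i,ij,...j->...', d, inv, d)  # squared Mahalanobis
    return np.exp(-0.5 * md2)

# Synthetic "skin" samples in (Cb, Cr) space (illustrative statistics).
rng = np.random.default_rng(0)
skin = rng.normal(loc=[110.0, 150.0], scale=5.0, size=(500, 2))
mean, cov = fit_skin_model(skin)

# A 2x2 image of (Cb, Cr) pairs: top row near the skin cluster, bottom far.
img_cbcr = np.array([[[110.0, 150.0], [111.0, 149.0]],
                     [[40.0, 40.0], [200.0, 220.0]]])
likelihood = skin_likelihood(img_cbcr, mean, cov)  # the likelihood map
mask = likelihood > 0.5  # binarization threshold (illustrative)
print(mask)
```

Binarizing the likelihood map with a threshold, as in the last line, is the segmentation step the text mentions; the hole-counting and template-matching screening would then run on the connected components of `mask`.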
The process of identifying and outlining the face region in the IR image: the Haar feature value reflects the grey-level variation of the image; the value of a Haar feature is defined as the sum of the pixels in the white rectangle minus the sum of the pixels in the black rectangle. Since face and non-face images take different values at the same positions, these features can distinguish faces from non-faces. Obtain an infrared face data set and a non-infrared-face data set as the training set and preprocess it; represent face features with Haar-like features, and use an "integral image" to compute the feature values quickly. Use the Adaboost algorithm to pick out the rectangular features (weak classifiers) that best represent a face, and combine the weak classifiers into a strong classifier by weighted voting. Several trained strong classifiers are connected in series into a cascade classifier; the cascade structure effectively improves the detection speed of the classifier. After an infrared face detector has been trained, the infrared face image to be input is preprocessed, and the infrared face detector judges whether it is a face or not. If it is a face, the infrared face is extracted according to the set window.
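The integral-image trick that makes Haar feature evaluation fast can be sketched as follows — a single two-rectangle feature on a toy image. The Adaboost training and the cascade are omitted, and all sizes and values are illustrative:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero-padded top row and left column:
    ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [x, x+w) x [y, y+h), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: white (left) half minus black
    (right) half, capturing a horizontal grey-level change."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy image: bright left half, dark right half -> large feature response.
img = np.zeros((8, 8), dtype=np.int64)
img[:, :4] = 10
ii = integral_image(img)
f = haar_two_rect(ii, 0, 0, 8, 8)
print(f)  # 10 * 4 * 8 - 0 = 320
```

Each rectangle sum costs four table lookups regardless of rectangle size, which is why a detector can evaluate thousands of Haar features per window in real time.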
In another embodiment, step S200 specifically includes: if the structured-light module has an RGB camera, the effective face coding region corresponding to the RGB face region is obtained through the intrinsic/extrinsic parameters and coordinate mapping; the face region in the IR image is found through the calibrated mapping relationship. The main process comprises acquiring calibration-board images, calibrating the intrinsic and extrinsic parameters of the cameras, and registering the RGB camera with the IR camera:
Acquire calibration-board images: calibrate with the chessboard method, shooting chessboard pictures at several different angles with the RGB camera and the IR camera, acquiring an IR image and an RGB image simultaneously at each angle.
Calibrate the intrinsic and extrinsic parameters of the RGB camera and the IR camera using an existing calibration toolbox (the Matlab calibration toolbox, the OpenCV calibration toolbox) to obtain the intrinsic and extrinsic parameters of both cameras.
Register the RGB camera with the IR camera:
(1) Transform the RGB face-region pixels into space coordinates:
Prgb = dis * inv(t) * p_rgb, where p_rgb is the pixel coordinate matrix of the RGB image, dis is the distance value, inv(t) is the inverse of the RGB camera intrinsic matrix, and Prgb is the RGB image space coordinate.
(2) Since the coordinate frame of the IR camera differs from that of the RGB camera, a rotation-translation transformation is needed to relate them: Pir = R * Prgb + T, where Pir is the space coordinate of the infrared image, R is the rotation matrix, and T is the translation matrix.
(3) Uir = fxir * Pir(1) / Pir(3) + cxir;
Vir = fyir * Pir(2) / Pir(3) + cyir;
where Pir(1), Pir(2), Pir(3) are the X, Y, Z components of Pir; (Uir, Vir) are the infrared image pixel coordinates; fxir, fyir are the IR camera focal lengths; and cxir, cyir are the IR camera principal points.
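Steps (1)–(3) can be sketched in code as follows. The intrinsic matrices, R, T, and the distance value are assumed example numbers, not calibrated values; `K_rgb` plays the role of the intrinsic matrix t in the formulas above:

```python
import numpy as np

def rgb_pixel_to_ir_pixel(u_rgb, v_rgb, dis, K_rgb, R, T, K_ir):
    """Map an RGB pixel at distance `dis` to IR pixel coordinates:
    back-project with the RGB intrinsics, rotate/translate into the
    IR camera frame, then project with the IR intrinsics."""
    p_rgb = np.array([u_rgb, v_rgb, 1.0])
    P_rgb = dis * np.linalg.inv(K_rgb) @ p_rgb   # Prgb = dis * inv(t) * p_rgb
    P_ir = R @ P_rgb + T                         # Pir = R * Prgb + T
    fx, fy = K_ir[0, 0], K_ir[1, 1]
    cx, cy = K_ir[0, 2], K_ir[1, 2]
    u_ir = fx * P_ir[0] / P_ir[2] + cx           # Uir
    v_ir = fy * P_ir[1] / P_ir[2] + cy           # Vir
    return u_ir, v_ir

# Illustrative intrinsics/extrinsics (units: pixels and millimetres).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([50.0, 0.0, 0.0])  # 50 mm baseline along x
u, v = rgb_pixel_to_ir_pixel(320.0, 240.0, 800.0, K, R, T, K)
print(u, v)  # 351.25 240.0
```

Mapping the four corners of the RGB face box this way yields the corresponding effective face coding region in the infrared coded image.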
If the structured-light module has no RGB camera, the face region in the infrared image corresponds one-to-one with the effective face coding region in the infrared coded image. The infrared coded-pattern projector can project structured-light coded patterns including random speckle coded patterns, regular speckle coded patterns, and symbol-array coded patterns.
In another embodiment, the depth decoding in step S300 includes block matching and disparity computation for random speckle coded patterns, or symbol recognition, correction, and registration to compute disparity for regular speckle coded patterns and symbol-array coded patterns. The obtained disparity, i.e. the offset, is combined with the depth calculation formula and the known distance of the reference coded image, the IR camera focal length, and the physical pixel pitch to compute the depth information of the face region.
As an example, the depth decoding process for a random speckle coded pattern is as follows (see Fig. 4):
3.1 Preprocessing. Perform coherence enhancement and binarization on the infrared speckle coded image captured by the IR camera, converting the input infrared speckle coded image into a binarized speckle image to facilitate the subsequent matching computation;
3.2 Speckle image rotation. The binarized speckle image is optionally rotated 90° clockwise or counterclockwise, or left unrotated, to choose whether the block-matching search and depth calculation run along the X axis or the Y axis;
3.3 Block matching. The binarized speckle image is block-matched against the cured reference speckle image (projected onto a vertical plane at a known distance and passed through the same preprocessing steps): within a certain search range of the reference speckle image, the best-matching block is found by similarity comparison, yielding the offset between the centre of an input binarized speckle image block and the centre of its best-matching block in the reference speckle image, i.e. the disparity;
3.4 Depth calculation. According to the monocular structured-light depth formula, the depth value of each pixel of the binarized speckle image is obtained as d′ = f·s·d / (f·s + μ·Δm·d), where f is the focal length of the IR camera, s is the baseline distance between the infrared coded-pattern projector and the IR camera, d is the known distance of the reference speckle image, Δm is the offset, μ is the pixel pitch of the IR camera sensor, and d′ is the depth value. The binarized speckle image is finally converted to a depth map through this formula.
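Step 3.4 reduces to the single monocular structured-light relation d′ = f·s·d / (f·s + μ·Δm·d). A sketch with illustrative parameter values (2 mm focal length, 50 mm baseline, 0.8 m reference distance, 3 µm pixel pitch — none of these are taken from the patent), under the sign convention that a positive offset means the object is closer than the reference plane:

```python
def depth_from_disparity(delta_m, f=2.0e-3, s=50e-3, d=0.8, mu=3e-6):
    """Monocular structured-light depth from the block-matching offset:
    d' = f*s*d / (f*s + mu*delta_m*d).
    delta_m is the offset in pixels; f, s, d are in metres and mu is the
    sensor pixel pitch in metres. All defaults are illustrative values."""
    return (f * s * d) / (f * s + mu * delta_m * d)

print(depth_from_disparity(0.0))    # zero offset -> the reference distance
print(depth_from_disparity(10.0))   # positive offset -> closer than 0.8 m
print(depth_from_disparity(-10.0))  # negative offset -> farther than 0.8 m
```

Applying this function to the per-block offsets from step 3.3 converts the binarized speckle image into the depth map.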
In another embodiment, step S400 specifically includes: after the face-region depth information is obtained, transformations among three coordinate systems — the image coordinate system, the camera coordinate system, and the world coordinate system — are needed. The pixels of the image containing the face-region depth information are first transformed into the camera coordinate system to obtain 3D vertex coordinates, which are then transformed by the transformation matrix into the real-world coordinate system to obtain the 3D face point cloud coordinates. The detailed process is as follows.
The image coordinate system expresses coordinates in physical units of the object and therefore has physical meaning (see Fig. 5). The intersection of the image plane with the camera optical axis (usually the image centre) is the origin O1 of this coordinate system, with pixel coordinates (u0, v0); dx and dy denote the physical size of each pixel along the x and y axes. The conversion of a pixel in the same image between the pixel-unit coordinate system (u, v) and the physical-unit coordinate system (x, y) can then be expressed in homogeneous coordinates and matrix form as u = x/dx + u0, v = y/dy + v0.
The camera coordinate system takes the optical centre of the camera as the origin Oc, with the Zc axis coinciding with the camera optical axis, perpendicular to the imaging plane and positive along the viewing direction; OcO1 is the camera focal length f, and the Xc and Yc axes are parallel to the x and y axes of the physical image coordinate representation. The relationships among the three coordinate systems are shown in Fig. 6. The relationship between the camera coordinate system and the (physical) image coordinate system is expressed in homogeneous coordinates as x = f·Xc/Zc, y = f·Yc/Zc.
The world coordinate system describes the actual position of the object or the camera in the real environment, with XW, YW, ZW as its three axes. If the homogeneous coordinates of a point M in 3D space are (Xw, Yw, Zw, 1)^T in the world coordinate system and (Xc, Yc, Zc, 1)^T in the camera coordinate system, the conversion between the two is: (Xc, Yc, Zc, 1)^T = [[R, t], [0, 1]]·(Xw, Yw, Zw, 1)^T,
where R denotes a 3×3 rotation matrix, t denotes a 3×1 translation vector, and T denotes the 4×4 transformation matrix between the two coordinate systems. From the above transformation relations, the conversion between the image pixel coordinate system and the world coordinate system is obtained: Zc·(u, v, 1)^T = K·[R t]·(Xw, Yw, Zw, 1)^T,
where K = [[αx, 0, u0], [0, αy, v0], [0, 0, 1]] with αx = f/dx and αy = f/dy; (αx, αy, u0, v0) are the intrinsic parameters of the camera, and (R, t) are the extrinsic parameters of the camera, determined by the position of the camera relative to the world coordinate system.
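The image → camera → world chain of step S400 can be sketched as follows. K, R, and t are assumed example values; mapping camera coordinates back to world coordinates uses Xw = Rᵀ(Xc − t), the inverse of the Xc = R·Xw + t relation above:

```python
import numpy as np

def depth_image_to_world(depth, K, R, t):
    """Back-project a depth image to camera coordinates, then transform
    the camera-frame points into world coordinates with Xw = R^T (Xc - t)."""
    h, w = depth.shape
    fx, fy, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    v, u = np.mgrid[0:h, 0:w]
    Zc = depth
    Xc = (u - u0) * Zc / fx          # invert u = alpha_x * Xc/Zc + u0
    Yc = (v - v0) * Zc / fy          # invert v = alpha_y * Yc/Zc + v0
    Pc = np.stack([Xc, Yc, Zc], axis=-1).reshape(-1, 3)
    return (Pc - t) @ R              # row-vector form of R^T (Pc - t)

# Illustrative intrinsics for a tiny 4x3 depth map (principal point at centre).
K = np.array([[500.0, 0.0, 2.0],
              [0.0, 500.0, 1.5],
              [0.0, 0.0, 1.0]])
depth = np.full((3, 4), 1000.0)      # flat plane 1000 units from the camera
cloud = depth_image_to_world(depth, K, np.eye(3), np.zeros(3))
print(cloud.shape)  # (12, 3)
```

With identity extrinsics the world frame coincides with the camera frame, so every point of the flat test plane keeps Z = 1000; a calibrated (R, t) would place the face point cloud in the real-world frame.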
In another embodiment, a fast 3D face point cloud generation device based on a structured-light module is also disclosed, comprising the structured-light module, a face locating and identification image unit, an effective face coding region extraction unit, a face region depth perception unit, and a 3D face point cloud generation unit, wherein:
The structured-light module comprises an infrared coded-pattern projector, an infrared camera, and an infrared floodlight source. The infrared coded-pattern projector is used to project a structured-light coded pattern; the infrared camera works together with the infrared coded-pattern projector or the infrared floodlight source to capture infrared coded images or plain infrared images; the infrared floodlight source is used to generate uniform infrared illumination of the target object or projection space;
The face locating and identification image unit is used to identify and locate the image region containing a face through the infrared camera of the structured-light module, marking the face region in the infrared image;
The effective face coding region extraction unit is used to turn on the infrared coded-pattern projector of the structured-light module, capture an infrared coded image with the infrared camera, and map the face region identified in the infrared image into the infrared coded image to obtain an effective face coding region;
The face region depth perception unit is used to decode the effective face coding region against the corresponding reference coding region and obtain the depth information corresponding to the effective face coding region;
The 3D face point cloud generation unit is used to convert the depth information of the effective face coding region into a 3D face point cloud through coordinate transformation.
In another embodiment, the structured-light module further includes an RGB camera. The face locating and identification image module is then used to identify and locate the image region containing a face through the RGB camera of the structured-light module, marking the face region in the RGB image; the effective face coding region extraction module is used to turn on the infrared coded-pattern projector of the structured-light module, capture an infrared coded image with the infrared camera, and map the face region identified in the RGB image into the infrared coded image to obtain the effective face coding region.
The structured-light module, as shown in Fig. 7, comprises the infrared coded-pattern projector 400, the IR camera 200, and the infrared floodlight source 300, and may also include the RGB camera 100; if the RGB camera is included, the intrinsic and extrinsic parameters of the RGB camera and IR camera are calibrated in advance;
The infrared coded-pattern projector 400 is used to project structured-light coded patterns such as random speckle coded patterns, regular speckle coded patterns, and symbol-array coded patterns, generally using a laser diode (LD) or a vertical-cavity surface-emitting laser (VCSEL) as the laser source, or a MEMS projection device or micro-projector, to project the structured-light coded pattern over a certain FoV range;
The IR camera 200 works together with the infrared coded-pattern projector or the infrared floodlight source to capture infrared coded images or IR images;
The infrared floodlight source 300 is used to generate uniform infrared illumination of the target object or projection space, generally using a VCSEL or LED as the light source;
The RGB camera 100 is generally a high-pixel, high-image-quality camera; affected by the shooting environment, automatic exposure, and so on, its frame rate is variable. The RGB camera outputs a field sync signal, which lets the IR camera synchronously shoot one frame, two frames, or multiple frames.
Although the above embodiments are completed in specific systems or methods, they do not limit the present invention. A 3D recognition approach in which a processor, through control or register configuration, specifies an ROI (region of interest) in the infrared coded image for depth decoding is essentially identical to the implementation steps of the present invention.
Although the embodiments of the present invention have been described above in conjunction with the drawings, the invention is not limited to the above specific embodiments and application fields; the above specific embodiments are only illustrative and instructive, not restrictive. Under the enlightenment of this specification, and without departing from the scope protected by the claims of the present invention, those skilled in the art may also devise various forms, all of which fall within the protection of the present invention.

Claims (10)

1. A fast 3D face point cloud generation method based on a structured-light module, comprising:
S100: identifying, through the RGB camera or infrared camera of the structured-light module, the image region where the face is located, i.e., identifying the face region in the RGB image or infrared image;
S200: turning on the infrared coding projector of the structured-light module and capturing an infrared coding image with the infrared camera, then mapping the face region into the infrared coding image to obtain an effective face coding region;
S300: decoding the effective face coding region to obtain the depth information corresponding to the effective face coding region;
S400: converting the depth information of the effective face coding region into a 3D face point cloud through a coordinate transformation.
2. The method according to claim 1, wherein step S100 specifically comprises:
if the structured-light module has an RGB camera, calibrating the RGB camera and the infrared camera in advance and using the RGB camera to identify and locate the face; if the structured-light module has no RGB camera, using the infrared camera in combination with infrared flood illumination to identify and locate the face.
3. The method according to claim 2, wherein
(1) the specific steps of identifying and locating the face with the RGB camera comprise:
S10011: acquiring an RGB image as the image to be detected through the RGB camera of the structured-light module;
S10012: computing the similarity between every pixel in the image to be detected and the skin color to obtain a skin-color likelihood map, so that the skin-color regions in the image to be detected can be segmented;
S10013: excluding the non-skin-color regions to complete the screening of candidate-region images;
S10014: building a face template, matching the candidate-region images against the face template, and judging by detection whether each is a face region, so that the face region is extracted;
(2) the specific steps of identifying and locating the face with the infrared camera comprise:
S10021: taking an infrared face data set and a non-face data set as the training set, preprocessing the training set, representing facial features with Haar-like features, and using an "integral image" for fast computation of the feature values, so as to train an infrared face detector;
S10022: preprocessing the infrared image to be input, judging with the infrared face detector whether it contains a face or not, and, if it is a face, extracting the face region according to the set window.
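The "integral image" named in step S10021 lets any rectangular pixel sum, and therefore any Haar-like feature value, be evaluated with four array lookups regardless of window size. A minimal NumPy sketch (the function names and the example two-rectangle feature are illustrative, not taken from the patent):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, zero-padded on top/left."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] using four lookups in the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect_vertical(ii, r, c, h, w):
    """A two-rectangle Haar-like feature: top half minus bottom half."""
    top = rect_sum(ii, r, c, r + h // 2, c + w)
    bottom = rect_sum(ii, r + h // 2, c, r + h, c + w)
    return top - bottom
```

Because each feature costs a constant number of lookups after one pass over the image, exhaustive sliding-window evaluation of many features — the basis of this style of face detector — stays fast.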
4. The method according to claim 1, wherein step S200 specifically comprises:
if the structured-light module has an RGB camera, mapping with the intrinsic and extrinsic parameters and the coordinates to obtain the effective face coding region corresponding to the RGB face region; if the structured-light module has no RGB camera, the face region in the infrared image directly corresponds to the effective face coding region in the infrared coding image.
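The intrinsic/extrinsic mapping in claim 4 can be sketched by back-projecting the RGB bounding-box corners at an assumed face distance, applying the calibrated RGB-to-IR extrinsics, and re-projecting with the IR intrinsics. A hedged NumPy illustration — the fixed-depth approximation and all names are assumptions, not the patent's exact procedure:

```python
import numpy as np

def map_rgb_roi_to_ir(bbox, K_rgb, K_ir, R, t, depth_m):
    """Map an RGB face bounding box into the IR image: back-project its
    corners at an assumed face distance, transform by the RGB->IR
    extrinsics (R, t), and re-project with the IR intrinsics.
    bbox = (u0, v0, u1, v1) in RGB pixel coordinates."""
    u0, v0, u1, v1 = bbox
    corners = np.array([[u0, v0], [u1, v0], [u0, v1], [u1, v1]], dtype=float)
    # Back-project pixels to 3D camera coordinates at the assumed depth.
    pts = np.c_[corners, np.ones(4)] @ np.linalg.inv(K_rgb).T * depth_m
    # RGB camera frame -> IR camera frame.
    pts_ir = pts @ R.T + t
    # Project with the IR intrinsics and take the bounding box.
    proj = pts_ir @ K_ir.T
    uv = proj[:, :2] / proj[:, 2:3]
    return (uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max())
```

With identical intrinsics and an identity extrinsic transform the box maps to itself, which is a convenient sanity check for a calibration pipeline.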
5. The method according to claim 1, wherein
the infrared coding projector can project structured-light coding patterns including random speckle coding patterns, regular speckle coding patterns, and character-array coding patterns.
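For simulation or decoder testing, a random speckle coding pattern of the kind named in claim 5 can be emulated as a sparse binary image (purely illustrative — a real module forms the pattern optically with an LD/VCSEL and diffractive or MEMS optics; the density value is an assumption):

```python
import numpy as np

def random_speckle_pattern(h, w, density=0.08, seed=0):
    """Binary random speckle coding pattern: each pixel is lit
    independently with probability `density`."""
    rng = np.random.default_rng(seed)
    return (rng.random((h, w)) < density).astype(np.uint8)
```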
6. The method according to claim 1, wherein the decoding in step S300 comprises block matching and disparity computation for random speckle coding patterns, or symbol recognition, correction, and registration for regular speckle coding patterns and character-array coding patterns to obtain the disparity.
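The block matching and disparity computation named for random speckle patterns can be sketched as a brute-force sum-of-absolute-differences search of the captured speckle image against the reference coding image; the window size, search range, and cost function here are assumptions, not the patent's parameters:

```python
import numpy as np

def block_match_disparity(captured, reference, block=9, max_disp=32):
    """Per-pixel horizontal block matching (SAD cost) of the captured
    speckle image against the reference coding image. Returns an integer
    disparity map; border pixels are left at 0."""
    h, w = captured.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    cap = captured.astype(np.int32)
    ref = reference.astype(np.int32)
    for r in range(half, h - half):
        for c in range(half, w - half):
            patch = cap[r - half:r + half + 1, c - half:c + half + 1]
            best, best_d = None, 0
            for d in range(0, min(max_disp, c - half) + 1):
                cand = ref[r - half:r + half + 1,
                           c - d - half:c - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[r, c] = best_d
    return disp
```

In a structured-light module the disparity is then converted to depth from the triangulation baseline; restricting the search to the effective face coding region, as the claims describe, is what makes the computation fast.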
7. The method according to claim 1, wherein step S400 specifically comprises:
after the depth information of the effective face coding region is obtained, performing transformations among the three coordinate systems of the image coordinate system, the camera coordinate system, and the world coordinate system: first transforming the pixels of the image containing the depth information of the effective face coding region into the camera coordinate system to obtain three-dimensional vertex coordinates, and then transforming them into the world coordinate system according to a transformation matrix to obtain the 3D face point cloud coordinates.
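The image-to-camera-to-world chain in claim 7 reduces to multiplying each pixel's homogeneous coordinates by the inverse intrinsic matrix, scaling by its depth, and applying a rigid transform. A minimal NumPy sketch (symbol names are illustrative):

```python
import numpy as np

def depth_to_point_cloud(depth, K, T_world_cam=np.eye(4)):
    """Convert a face-region depth map (meters, 0 = invalid) into a 3D
    point cloud: image coords -> camera coords via the inverse intrinsic
    matrix K, then camera coords -> world coords via the 4x4 transform."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    pix = np.stack([u[valid], v[valid], np.ones(z.size)], axis=1)
    cam = (pix @ np.linalg.inv(K).T) * z[:, None]             # camera frame
    world = cam @ T_world_cam[:3, :3].T + T_world_cam[:3, 3]  # world frame
    return world
```

With an identity world transform the point cloud is simply expressed in the camera frame, which is often sufficient for face recognition.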
8. A fast 3D face point cloud generation device based on a structured-light module, comprising:
a structured-light module, a face locating and identifying image unit, an effective face coding region extraction unit, a face region depth perception unit, and a 3D face point cloud generation unit, wherein
the structured-light module comprises an infrared coding projector, an infrared camera, and an infrared floodlight source; the infrared coding projector is used to project structured-light coding patterns; the infrared camera works together with the infrared coding projector or the infrared floodlight source to capture infrared coding images or infrared images; the infrared floodlight source is used to generate uniform infrared illumination of the target object or projection space;
the face locating and identifying image unit is used to identify, through the infrared camera of the structured-light module, the image region where the face is located, i.e., to identify the face region in the infrared image;
the effective face coding region extraction unit is used to turn on the infrared coding projector of the structured-light module, capture an infrared coding image with the infrared camera, and map the face region identified in the infrared image into the infrared coding image to obtain the effective face coding region;
the face region depth perception unit is used to decode the effective face coding region in combination with the corresponding reference coding region to obtain the depth information corresponding to the effective face coding region;
the 3D face point cloud generation unit is used to convert the depth information of the effective face coding region into a 3D face point cloud through a coordinate transformation.
9. The device according to claim 8, wherein
the structured-light module further comprises an RGB camera; wherein
the face locating and identifying image unit is used to identify, through the RGB camera of the structured-light module, the image region where the face is located, i.e., to identify the face region in the RGB image;
the effective face coding region extraction unit is used to turn on the infrared coding projector of the structured-light module, capture an infrared coding image with the infrared camera, and map the face region identified in the RGB image into the infrared coding image to obtain the effective face coding region.
10. The device according to claim 9, wherein the RGB camera uses a high-pixel, high-image-quality camera whose frame rate is variable.
CN201811281494.XA 2018-10-23 2018-10-23 A kind of quick 3D face point cloud generation method and device based on structure optical mode group Pending CN109583304A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811281494.XA CN109583304A (en) 2018-10-23 2018-10-23 A kind of quick 3D face point cloud generation method and device based on structure optical mode group

Publications (1)

Publication Number Publication Date
CN109583304A true CN109583304A (en) 2019-04-05

Family

ID=65920836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811281494.XA Pending CN109583304A (en) 2018-10-23 2018-10-23 A kind of quick 3D face point cloud generation method and device based on structure optical mode group

Country Status (1)

Country Link
CN (1) CN109583304A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169475A (en) * 2017-06-19 2017-09-15 电子科技大学 A kind of face three-dimensional point cloud optimized treatment method based on kinect cameras
CN107341481A (en) * 2017-07-12 2017-11-10 深圳奥比中光科技有限公司 It is identified using structure light image
CN107844773A (en) * 2017-11-10 2018-03-27 广东日月潭电源科技有限公司 A kind of Three-Dimensional Dynamic Intelligent human-face recognition methods and system
CN108537191A (en) * 2018-04-17 2018-09-14 广州云从信息科技有限公司 A kind of three-dimensional face identification method based on structure light video camera head
CN108665535A (en) * 2018-05-10 2018-10-16 青岛小优智能科技有限公司 A kind of three-dimensional structure method for reconstructing and system based on coding grating structured light

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110246212A (en) * 2019-05-05 2019-09-17 上海工程技术大学 A kind of target three-dimensional rebuilding method based on self-supervisory study
CN110246212B (en) * 2019-05-05 2023-02-07 上海工程技术大学 Target three-dimensional reconstruction method based on self-supervision learning
CN110414435A (en) * 2019-07-29 2019-11-05 深兰科技(上海)有限公司 The generation method and equipment of three-dimensional face data based on deep learning and structure light
CN110490865A (en) * 2019-08-22 2019-11-22 易思维(杭州)科技有限公司 Stud point cloud segmentation method based on the high light-reflecting property of stud
CN110490865B (en) * 2019-08-22 2022-04-01 易思维(杭州)科技有限公司 Stud point cloud segmentation method based on high light reflection characteristic of stud
CN113673285A (en) * 2020-05-15 2021-11-19 深圳市光鉴科技有限公司 Depth reconstruction method, system, device and medium for snapshot by depth camera
CN113673285B (en) * 2020-05-15 2023-09-12 深圳市光鉴科技有限公司 Depth reconstruction method, system, equipment and medium during capturing of depth camera
CN111749550A (en) * 2020-06-30 2020-10-09 德施曼机电(中国)有限公司 Intelligent door lock system based on 3D structured light, intelligent door lock and storage medium
CN112562082A (en) * 2020-08-06 2021-03-26 长春理工大学 Three-dimensional face reconstruction method and system
WO2022040941A1 (en) * 2020-08-25 2022-03-03 深圳市大疆创新科技有限公司 Depth calculation method and device, and mobile platform and storage medium
CN112487963A (en) * 2020-11-27 2021-03-12 新疆爱华盈通信息技术有限公司 Wearing detection method and system for safety helmet
CN112883870A (en) * 2021-02-22 2021-06-01 北京中科深智科技有限公司 Face image mapping method and system
CN112861764A (en) * 2021-02-25 2021-05-28 广州图语信息科技有限公司 Face recognition living body judgment method
CN112861764B (en) * 2021-02-25 2023-12-08 广州图语信息科技有限公司 Face recognition living body judging method
CN113379893A (en) * 2021-05-27 2021-09-10 杭州小肤科技有限公司 Method for synthesizing 3D face model by utilizing optical reflection
CN113379893B (en) * 2021-05-27 2022-02-11 杭州小肤科技有限公司 Method for synthesizing 3D face model by utilizing optical reflection
CN113408377A (en) * 2021-06-03 2021-09-17 山东交通学院 Face living body detection method based on temperature information
CN113255587A (en) * 2021-06-24 2021-08-13 深圳市光鉴科技有限公司 Face-brushing payment system based on depth camera
CN113888614A (en) * 2021-09-23 2022-01-04 北京的卢深视科技有限公司 Depth recovery method, electronic device, and computer-readable storage medium
CN113888614B (en) * 2021-09-23 2022-05-31 合肥的卢深视科技有限公司 Depth recovery method, electronic device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN109583304A (en) A kind of quick 3D face point cloud generation method and device based on structure optical mode group
Raghavendra et al. Exploring the usefulness of light field cameras for biometrics: An empirical study on face and iris recognition
CN110147721B (en) Three-dimensional face recognition method, model training method and device
RU2431190C2 (en) Facial prominence recognition method and device
WO2019056988A1 (en) Face recognition method and apparatus, and computer device
CN109271950B (en) Face living body detection method based on mobile phone forward-looking camera
US20150347833A1 (en) Noncontact Biometrics with Small Footprint
Medioni et al. Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models
US20160019420A1 (en) Multispectral eye analysis for identity authentication
US20170091550A1 (en) Multispectral eye analysis for identity authentication
US8755607B2 (en) Method of normalizing a digital image of an iris of an eye
WO2016010724A1 (en) Multispectral eye analysis for identity authentication
CN107563304A (en) Unlocking terminal equipment method and device, terminal device
CN112052831A (en) Face detection method, device and computer storage medium
CN106155299B (en) A kind of pair of smart machine carries out the method and device of gesture control
CN108416291B (en) Face detection and recognition method, device and system
CN112232163B (en) Fingerprint acquisition method and device, fingerprint comparison method and device, and equipment
CN112232155A (en) Non-contact fingerprint identification method and device, terminal and storage medium
CN110909634A (en) Visible light and double infrared combined rapid in vivo detection method
CN112232159B (en) Fingerprint identification method, device, terminal and storage medium
Labati et al. Fast 3-D fingertip reconstruction using a single two-view structured light acquisition
CN111582036A (en) Cross-view-angle person identification method based on shape and posture under wearable device
CN108197549A (en) Face identification method and terminal based on 3D imagings
KR101053253B1 (en) Apparatus and method for face recognition using 3D information
CN112232157B (en) Fingerprint area detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190405