CN107169475B - Face three-dimensional point cloud optimization method based on a Kinect camera - Google Patents

Face three-dimensional point cloud optimization method based on a Kinect camera

Info

Publication number
CN107169475B
CN107169475B (application CN201710464550.2A)
Authority
CN
China
Prior art keywords
depth, pixel, value, face, RGB
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710464550.2A
Other languages
Chinese (zh)
Other versions
CN107169475A (en)
Inventor
李纯明
尹婕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201710464550.2A priority Critical patent/CN107169475B/en
Publication of CN107169475A publication Critical patent/CN107169475A/en
Application granted granted Critical
Publication of CN107169475B publication Critical patent/CN107169475B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005 - Tree description, e.g. octree, quadtree
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements

Abstract

The invention belongs to the field of three-dimensional reconstruction and relates to a face three-dimensional point cloud optimization method based on a Kinect camera. Unlike traditional approaches that process the point cloud in its three-dimensional form, the proposed scheme works directly on the depth information of the four-channel RGB-D image from the Kinect camera to obtain the point cloud optimization effect. The method runs the whole face point cloud optimization pipeline on an ordinary Kinect depth camera, avoiding both the high price of traditional high-precision equipment and the limited accuracy of general-purpose algorithms. Within the depth optimization step, when the surface is recovered from the optimized depth, the usual direct fusion of normal vectors is not adopted, because solving the surface from fused normals produces an offset error; instead, the geometric relationship between the depth gradient and the normal vector is exploited, the final depth is obtained directly, and a coordinate transformation finally yields a higher-quality three-dimensional point cloud.

Description

Face three-dimensional point cloud optimization method based on a Kinect camera
Technical field
The invention belongs to the field of three-dimensional reconstruction and relates to a face three-dimensional point cloud optimization method based on a Kinect camera.
Background technique
At present, three-dimensional reconstruction is a research hotspot in computer vision, and three-dimensional face modeling is a particularly important direction, frequently used in film and animation production, games, medical applications, and related scenarios. Current methods for obtaining a three-dimensional face fall into three categories: manual modeling, instrument acquisition, and image-based modeling. Manual modeling, the earliest means of three-dimensional modeling, requires professional operators. Instrument acquisition is typically represented by structured light and laser scanners, but in general the cost of such instruments is high. Image-based modeling mainly obtains the result from several images using certain algorithms; this approach is fairly simple, but owing to the limitations of the algorithms themselves the results are not very accurate or robust.
However, with the appearance of depth cameras such as the Kinect, the depth of a face can be acquired easily and rapidly, i.e., the three-dimensional point cloud of the face is easy to obtain. The depth values produced by the Kinect camera are quite noisy, though, so the point cloud is also very noisy and must be optimized. Traditional point cloud optimization mainly processes the point cloud directly in its three-dimensional form: for missing data, holes are usually filled on the triangle mesh generated from the cloud; for uneven distribution, methods based on optimal projection are generally used; for point cloud noise, the main approaches are projection-based methods, improved filtering methods, and methods based on probability statistics. Processing three-dimensional points directly leaves large uncertainty in the robustness of the algorithm, both in terms of data volume and in terms of the result.
Summary of the invention
The problem to be solved by this invention is, in view of the problems of the above conventional methods, to propose a face three-dimensional point cloud optimization method based on a Kinect camera.
The technical scheme of the invention, as shown in Figure 1, is a face three-dimensional point cloud optimization method based on a Kinect camera, characterized by comprising the following steps:
S1. Calibrate the Kinect camera, including the intrinsic and extrinsic parameters of the depth camera and the color camera;
S2. Acquire images of the tested face: using the camera calibrated in step S1, obtain the color map and depth map of the tested face, and register the acquired color map and depth map, i.e., align the depth map and color map, which differ in pixel size, one by one, so that each depth value has a corresponding RGB value;
S3. Preprocess the acquired images, including the missing points of the depth map and the erroneous points of the color map, and apply bilateral filtering to the depth map;
S4. Extract the face from the acquired color map, project the extracted face onto the aligned depth map, and obtain the corresponding face region in the depth map;
S5. Combine RGB and depth D, perform depth optimization on the corresponding face region in the D map, and then use the intrinsic parameters of the depth camera to convert the optimized depth into a three-dimensional point cloud.
The above scheme is the overall technical solution of the invention. Compared with the traditional way of processing three-dimensional points directly, the method uses the four-channel RGB-D image of the Kinect camera and processes the depth information directly to obtain the point cloud optimization effect; therefore, compared with conventional methods, the optimization is faster and the result is more stable.
Further, step S3 specifically comprises:
For a missing point of the depth D map, suppose the missing pixel has coordinates (i, j); search in the four directions up, down, left, and right of the point until the first nonzero value is found in each direction, then average the four values to fill the missing value of the pixel. For an erroneous point of the RGB map, first judge the values of the three RGB channels separately; if the three channels are simultaneously 0, the point is judged to be erroneous, and all three channels of the pixel are set to 1.
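For illustration, a minimal sketch of this preprocessing, assuming OpenCV cv::Mat containers (CV_16U depth in millimetres, CV_8UC3 color); the function names are illustrative and not from the patent:

    #include <opencv2/opencv.hpp>

    // Fill each missing depth pixel (value 0) with the average of the first
    // nonzero depth found in each of the four axis directions (step S3).
    static void fillMissingDepth(cv::Mat& depth /* CV_16U */) {
        const int dr[4] = {-1, 1, 0, 0};
        const int dc[4] = { 0, 0, -1, 1};
        cv::Mat out = depth.clone();
        for (int i = 0; i < depth.rows; ++i) {
            for (int j = 0; j < depth.cols; ++j) {
                if (depth.at<ushort>(i, j) != 0) continue;
                int sum = 0, found = 0;
                for (int d = 0; d < 4; ++d) {              // up, down, left, right
                    for (int r = i + dr[d], c = j + dc[d];
                         r >= 0 && r < depth.rows && c >= 0 && c < depth.cols;
                         r += dr[d], c += dc[d]) {
                        ushort v = depth.at<ushort>(r, c);
                        if (v != 0) { sum += v; ++found; break; }
                    }
                }
                if (found > 0) out.at<ushort>(i, j) = static_cast<ushort>(sum / found);
            }
        }
        depth = out;
    }

    // Set RGB error points (all three channels simultaneously 0) to (1,1,1).
    static void fixRgbErrors(cv::Mat& rgb /* CV_8UC3 */) {
        for (int i = 0; i < rgb.rows; ++i)
            for (int j = 0; j < rgb.cols; ++j) {
                cv::Vec3b& p = rgb.at<cv::Vec3b>(i, j);
                if (p[0] == 0 && p[1] == 0 && p[2] == 0)
                    p = cv::Vec3b(1, 1, 1);
            }
    }
    // The bilateral filtering of the depth map can then be done with
    // cv::bilateralFilter after converting the depth to CV_32F, since that
    // function does not accept 16-bit input.

The sketch averages over however many of the four directions yield a value, which reduces to the patent's four-value average in the interior of the image.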
Further, as shown in Fig. 2, step S4 specifically comprises:
S41. Map the acquired color map:
The RGB color space is mapped to the YCbCr space by the following formula 1:
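A standard ITU-R BT.601 form of this mapping, given for reference (the exact coefficients used in the patent are an assumption here):

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 0.257 & 0.504 & 0.098 \\ -0.148 & -0.291 & 0.439 \\ 0.439 & -0.368 & -0.071 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$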
In the YCbCr space, the Cr component of normal skin color lies roughly between 140 and 175, and the Cb component roughly between 100 and 120. Pixels meeting this rule are recognized as facial skin, which naturally also includes the neck and any exposed limb skin; the invention therefore uses the approximate value range of the CbCr components for prescreening.
S42. Preliminarily extract the face:
Each pixel of the transformed image obtained in S41 is tested with the two-dimensional Gaussian model shown in the following formula 2 for the probability that it is skin:
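For reference, the usual two-dimensional Gaussian skin-color model over the CbCr components takes the form below, where the mean m and covariance C are assumed to come from skin-sample statistics:

$$P\big(c \mid \text{skin}\big) = \exp\!\left(-\tfrac{1}{2}\,(c - m)^{T} C^{-1} (c - m)\right), \qquad c = (C_b,\, C_r)^{T}$$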
After the probability value of each pixel is obtained, the probability map is binarized; the result is the preliminary face extraction result.
S43. The acquired binary image is processed to extract the target face rectangular region. The processing includes morphological operations, and judgment factors such as fill rate, area ratio, and aspect ratio are used to further screen the binary image.
Further, step S5 specifically comprises:
S51. Intrinsic decomposition of the RGB image is performed according to the illumination model shown in the following formula 3:
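Consistent with the symbols defined immediately below, the illumination model presumably takes the intrinsic-decomposition form:

$$I(i,j) = \rho(i,j)\, S(i,j) + \beta(i,j)$$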
In formula 3, I(i,j) is the intensity at image pixel (i,j), S(i,j) is the shading value of the pixel, ρ(i,j) denotes the albedo (diffuse reflectance), and β(i,j) denotes the local illumination variation impact factor. The three parameters are solved in turn: first assume β = 0 and ρ = 1 and solve for S; then substitute the solved S, still assuming β = 0, and solve for ρ; finally substitute the solved ρ and S and obtain the last parameter β.
S52. The parameters in step S51 are solved as follows:
S521. Solve the shading value S:
The shading value at a point of the image is expressed as a linear form of the normal vector at that point, as shown in formula 4:
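A first-order spherical harmonics shading model of the kind described here is, as a plausible reference form:

$$S(i,j) = \boldsymbol{\ell}^{T}\, \tilde{\mathbf{n}}(i,j), \qquad \tilde{\mathbf{n}} = (1,\; n_x,\; n_y,\; n_z)^{T}$$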
Here n(i,j) is the normal vector at each image pixel, and the coefficient vector ℓ contains the first four spherical harmonic coefficients. Every pixel of the image can be used to solve for the shading value; what must be solved here is therefore an overdetermined least squares coefficient estimation problem, as in the following formula 5:
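The estimation presumably reads:

$$\boldsymbol{\ell}^{\ast} = \arg\min_{\boldsymbol{\ell}} \sum_{(i,j)} \big( I(i,j) - \boldsymbol{\ell}^{T}\, \tilde{\mathbf{n}}(i,j) \big)^{2}$$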
S522. Solve the albedo ρ(i,j):
It is solved using the energy function shown in the following formula 6:
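A plausible form of this energy, assuming the usual data term plus a bilaterally weighted smoothness term over the neighborhood N (the weight λ_ρ appears among the preset parameters later in the text):

$$E(\rho) = \sum_{k} \Big( \lambda_{\rho}\, \big( \rho_k S_k - I_k \big)^{2} + \sum_{j \in N(k)} w_c\, w_d\, \big( \rho_k - \rho_j \big)^{2} \Big)$$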
In formula 6, N is the neighborhood of a pixel, w_c is the weight coefficient related to pixel intensity values, w_d is the weight coefficient related to the initial depth, I is the gray value, and the subscript k is the pixel index; the weights are expressed as formula 7:
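Given the definitions that follow, the weights presumably take the standard bilateral form, with k the center pixel and (i,j) a neighbor in N(k):

$$w_c = \exp\!\left(-\frac{\big(I_k - I(i,j)\big)^{2}}{2\sigma_c^{2}}\right), \qquad w_d = \exp\!\left(-\frac{\big(z_k - z(i,j)\big)^{2}}{2\sigma_d^{2}}\right)$$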
In formula 7, z(i,j) is the depth value of pixel (i,j), I(i,j) is the color value of pixel (i,j), σ_c is the coefficient relating a pixel to the intensity values in its neighborhood, and σ_d is the correlation coefficient for depth discontinuities;
S523. Solve the local illumination variation impact factor β(i,j):
β(i,j) is obtained by the following formula 8:
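Consistent with the illumination model of formula 3, β is presumably the residual term:

$$\beta(i,j) = I(i,j) - \rho(i,j)\, S(i,j)$$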
S53. The face surface is recovered from the parameters obtained in step S52 by energy minimization, solved through the following formula 9:
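A plausible reconstruction of this energy combines a shading data term, fidelity to the initial depth z_0, and a Laplacian smoothness term (the λ weights are assumptions; this objective is the f(z) referenced in step S533):

$$f(z) = \sum_{(i,j)} \big( I - \rho\, S(\mathbf{n}(z)) - \beta \big)^{2} + \lambda_{z} \big( z - z_0 \big)^{2} + \lambda_{\Delta} \big( \Delta z \big)^{2}$$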
In formula 9, z_0 is the initial depth value of the image (obtained from the Kinect camera) and Δ denotes its Laplace operator. The normal vector of each pixel is expressed through the partial derivatives of the depth, with the relationship shown in the following formula 10:
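The standard relationship between the unit normal and the depth gradients, which matches the description here, is:

$$\mathbf{n} = \frac{\big(-z_x,\; -z_y,\; 1\big)^{T}}{\sqrt{z_x^{2} + z_y^{2} + 1}}$$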
In formula 10, the subscripts x and y of z denote the gradients in the x and y directions, respectively;
The solving of formula 9 proceeds as follows. Because of the nonlinear relationship between the normal vector and the depth gradient, nonlinear terms arise; the nonlinear solution is described below:
S531. Fix the nonlinear term, i.e. the denominator term in the computation of the normal vector: take this denominator from the previous iteration's result as a given value and bring it into the current computation of the normal vector, as shown in the following formula 11:
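Freezing the denominator at the previous iterate z^(k-1) makes the normal linear in the current depth, presumably as:

$$\mathbf{n}^{(k)} = \frac{\big(-z_x^{(k)},\; -z_y^{(k)},\; 1\big)^{T}}{\sqrt{\big(z_x^{(k-1)}\big)^{2} + \big(z_y^{(k-1)}\big)^{2} + 1}}$$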
S532. Using the obtained normal vectors, update the linear illumination model term, as in the following formula 12:
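so that the shading term of formula 4 is recomputed with the new normals, plausibly:

$$S^{(k)}(i,j) = \boldsymbol{\ell}^{T}\, \tilde{\mathbf{n}}^{(k)}(i,j)$$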
S533. Iterative solution: with the parameters z_0, ρ, β, iterate the computation as long as f(z_{k-1}) > f(z_k) holds, until f(z_{k+1}) > f(z_k); then stop the iteration, and the previous z_k is the final result of the optimization.
In the above scheme, the weight thresholds are τ = 0.05 and λ_ρ = 0.1 (σ_c and σ_d are likewise preset). The depth data is not converted to the 0-255 range of a grayscale image; the true depth values are used, in millimetres, with the value range converted to 0-2000 mm.
Further, according to the parameters of the Kinect depth camera, the two-dimensional depth z_k obtained by the iteration is transformed into three-dimensional point cloud form through a coordinate system conversion, as in the following formula 13:
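The back-projection described in the next line is the standard pinhole relation:

$$\big[ x_{(i,j)},\; y_{(i,j)},\; z_{(i,j)} \big]^{T} = z_k(i,j)\; K^{-1}\, \big[ i,\; j,\; 1 \big]^{T}$$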
In formula 13, (i,j) is a pixel of the image, [x_(i,j), y_(i,j), z_(i,j)] is the three-dimensional coordinate of pixel (i,j) to be solved, z_k(i,j) is its depth value, and K^{-1} is the inverse of the depth camera intrinsic matrix. With the above transformation, the three-dimensional point cloud data is obtained.
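As a sketch of this conversion in code, assuming an OpenCV depth map in millimetres and a PCL output cloud; the intrinsics fx, fy, cx, cy stand in for the calibration values from step S1:

    #include <opencv2/opencv.hpp>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    // Back-project an optimized depth map into a 3D point cloud:
    // [x, y, z]^T = z * K^{-1} * [j, i, 1]^T, written out per pixel.
    pcl::PointCloud<pcl::PointXYZ>::Ptr depthToCloud(const cv::Mat& depth /* CV_16U, mm */,
                                                     float fx, float fy,
                                                     float cx, float cy) {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        for (int i = 0; i < depth.rows; ++i) {
            for (int j = 0; j < depth.cols; ++j) {
                float z = depth.at<ushort>(i, j) * 0.001f;  // mm -> m
                if (z <= 0.0f) continue;                    // skip remaining holes
                cloud->push_back(pcl::PointXYZ((j - cx) * z / fx,   // x
                                               (i - cy) * z / fy,   // y
                                               z));
            }
        }
        return cloud;
    }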
Further, the method also comprises:
S6. Removing gross error points from the acquired three-dimensional point cloud:
Using the octree module of the PCL library, gross error points are removed from the acquired three-dimensional point cloud.
In general, after the point cloud is obtained, some gross error points appear around its periphery. The removal method for these points is similar to k-nearest-neighbor search: for a given point of the cloud, take a sphere of radius R centered on it and count the number n of neighboring points inside R; if n is smaller than some threshold, the point is judged to be a gross error point. However, because of the huge number of points, a spatial index must be built before processing. Common spatial index structures include the KD tree, R tree, quadtree, and octree; among these, the KD tree and the octree are the most widely used, and here the octree module of the PCL library is chosen to remove the gross error points. The class pcl::octree::OctreePointCloud is an octree structure implemented for the problem of large point cloud data volumes, and many subclasses derive from it to support different processing of the instance. The class pcl::octree::OctreePointCloudSearch implements an efficient octree-based neighbor search for point clouds; the member function used here is radiusSearch(), which mainly obtains the number of neighboring points within the radius. By comparing this count with a preset threshold, whether the point is a gross error point is determined; once a point is judged to be a gross error point, it is deleted from the octree, completing the filtering. The parameter settings of the processing are: the resolution of the octree structure is 0.000008f, the radius is set to 0.01f, and the neighbor count threshold is set to 13.
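A minimal sketch of this removal step with the PCL octree (API as in classic PCL 1.x), using the parameters stated above; collecting the surviving points into a new cloud stands in for the in-octree deletion described in the text:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/octree/octree_search.h>

    // A point survives only if at least `minNeighbors` points of the cloud lie
    // within `radius` of it (the query point itself is among them).
    pcl::PointCloud<pcl::PointXYZ>::Ptr
    removeGrossErrorPoints(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
        const float resolution   = 0.000008f; // octree resolution from the text
        const float radius       = 0.01f;     // search radius R
        const int   minNeighbors = 13;        // neighbor count threshold

        pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(resolution);
        octree.setInputCloud(cloud);
        octree.addPointsFromInputCloud();

        pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
        std::vector<int>   indices;
        std::vector<float> sqrDistances;
        for (const auto& p : cloud->points)
            if (octree.radiusSearch(p, radius, indices, sqrDistances) >= minNeighbors)
                filtered->push_back(p);
        return filtered;
    }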
The beneficial effects of the invention are as follows. The method runs the face three-dimensional point cloud optimization pipeline on an existing Kinect depth camera, avoiding both the high price of traditional high-precision equipment and the limited accuracy of general-purpose algorithms. Within the depth optimization step, when the surface is recovered from the optimized depth, the usual direct fusion of normal vectors is not adopted, because solving the surface from fused normals produces an offset error; instead, the geometric relationship between the depth gradient and the normal vector is used, the final depth is obtained directly, and a coordinate transformation finally yields a higher-quality three-dimensional point cloud.
Detailed description of the invention
Fig. 1 is a flow diagram of the face three-dimensional point cloud optimization method based on a Kinect camera according to the invention;
Fig. 2 is a flowchart of the depth optimization processing algorithm.
Specific embodiment
The technical solution of the invention has been described in detail in the summary above and is not repeated here.

Claims (2)

1. A face three-dimensional point cloud optimization method based on a Kinect camera, characterized by comprising the following steps:
S1. Calibrating the Kinect camera, including the intrinsic and extrinsic parameters of the depth camera and the color camera;
S2. Acquiring images of the tested face: using the camera calibrated in step S1, obtaining the RGB map and the depth D map of the tested face, and registering them, i.e., aligning the depth D map and the RGB map, which differ in pixel size, one by one, so that the depth value of each pixel has a corresponding RGB value;
S3. Preprocessing the acquired images, including the missing points of the depth D map and the erroneous points of the RGB map, and applying bilateral filtering to the depth D map; specifically:
for a missing point of the depth D map: supposing the missing pixel has coordinates (i, j), searching in the four directions up, down, left, and right of the point until the first nonzero value is found in each direction, and then averaging the four values to fill the missing value of the pixel;
for an erroneous point of the RGB map: first judging the values of the three RGB channels separately; if the three channels are simultaneously 0, judging the point to be erroneous and setting all three channels of the pixel to 1;
S4. Extracting the face from the acquired RGB map, projecting the extracted face onto the aligned depth D map, and obtaining the corresponding face region in the depth D map; specifically:
S41. Mapping the acquired RGB map:
mapping the RGB color space to the YCbCr space by formula 1;
S42. Preliminarily extracting the face:
using the two-dimensional Gaussian model shown in formula 2 to compute, for each pixel of the transformed image obtained in S41, the probability that the pixel is skin;
after the probability value of each pixel is obtained, binarizing the probability map, the result being the preliminary face extraction result;
S43. Processing the acquired binary image to extract the target face rectangular region;
S5. Combining RGB and depth D, performing depth optimization on the corresponding face region in the D map, and then using the intrinsic parameters of the depth camera to convert the optimized depth into a three-dimensional point cloud; specifically:
S51. Performing intrinsic decomposition of the RGB image according to the illumination model shown in formula 3;
in formula 3, I(i,j) is the intensity at image pixel (i,j), i.e. the pixel gray value, S(i,j) is the shading value of the pixel, ρ(i,j) denotes the albedo, and β(i,j) denotes the local illumination variation impact factor;
S52. Solving for the parameters in step S51, specifically:
S521. Solving the shading value S:
the shading value at a point of the image being expressed as a linear form of the normal vector at that point, as shown in formula 4;
wherein n(i,j) is the normal vector at each image pixel and the coefficient vector contains the first four spherical harmonic coefficients; every pixel of the image can be used to solve for the shading value, so what must be solved is an overdetermined least squares coefficient estimation problem, as in formula 5;
S522. Solving the albedo ρ(i,j):
solving with the energy function shown in formula 6;
in formula 6, N being the neighborhood of a pixel, w_c the weight coefficient related to pixel intensity values, w_d the weight coefficient related to the initial depth, I the gray value, and the subscript k the pixel index, the weights being expressed as formula 7;
in formula 7, z(i,j) is the depth value of pixel (i,j), I(i,j) is the RGB value of pixel (i,j), σ_c is the coefficient relating a pixel to the intensity values in its neighborhood, and σ_d is the correlation coefficient for depth discontinuities;
S523. Solving the local illumination variation impact factor β(i,j):
obtaining β(i,j) by formula 8;
S53. Performing the depth optimization of the face using the parameters obtained in step S52, by energy minimization solved through formula 9;
in formula 9, z_0 is the initially obtained depth value of the image and Δ denotes the Laplace operator; the normal vector of each pixel is expressed through the partial derivatives of the depth, with the relationship shown in formula 10;
in formula 10, the subscripts x and y of z denote the gradients in the x and y directions, respectively;
the solving method of formula 9 being specifically:
S531. Fixing the nonlinear term, i.e. the denominator term in the computation of the normal vector: taking this denominator from the previous iteration's result as a given value and bringing it into the current computation of the normal vector, as shown in formula 11;
S532. Using the obtained normal vectors, updating the linear intrinsic-decomposition illumination model term, as in formula 12;
S533. Iterating the solution: with the parameters z_0, ρ, β, iterating the computation as long as f(z_{k-1}) > f(z_k) holds,
until f(z_{k+1}) > f(z_k); then stopping the iteration, the previous z_k being the final result of the optimization;
τ, σ_c, σ_d, and λ_ρ being preset weight thresholds;
S54. According to the parameters of the Kinect depth camera, transforming the two-dimensional depth z_k obtained by the S533 iteration into three-dimensional point cloud form through a coordinate system conversion, as in formula 13;
in formula 13, (i,j) is a pixel of the image, [x_(i,j), y_(i,j), z_(i,j)] is the three-dimensional coordinate of pixel (i,j) to be solved, z_k(i,j) is its depth value, and K^{-1} is the inverse of the depth camera intrinsic matrix.
2. The face three-dimensional point cloud optimization method based on a Kinect camera according to claim 1, characterized by further comprising:
S6. Removing gross error points from the acquired three-dimensional point cloud:
using the octree module of the PCL library to remove gross error points from the acquired three-dimensional point cloud.
CN201710464550.2A 2017-06-19 2017-06-19 Face three-dimensional point cloud optimization method based on a Kinect camera Expired - Fee Related CN107169475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710464550.2A CN107169475B (en) 2017-06-19 2017-06-19 Face three-dimensional point cloud optimization method based on a Kinect camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710464550.2A CN107169475B (en) 2017-06-19 2017-06-19 Face three-dimensional point cloud optimization method based on a Kinect camera

Publications (2)

Publication Number Publication Date
CN107169475A CN107169475A (en) 2017-09-15
CN107169475B true CN107169475B (en) 2019-11-19

Family

ID=59819441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710464550.2A Expired - Fee Related CN107169475B (en) 2017-06-19 2017-06-19 Face three-dimensional point cloud optimization method based on a Kinect camera

Country Status (1)

Country Link
CN (1) CN107169475B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948400A (en) * 2017-12-20 2019-06-28 宁波盈芯信息科技有限公司 It is a kind of to be able to carry out the smart phone and its recognition methods that face characteristic 3D is identified
CN108564041B (en) * 2018-04-17 2020-07-24 云从科技集团股份有限公司 Face detection and restoration method based on RGBD camera
CN109087347B (en) * 2018-08-15 2021-08-17 浙江光珀智能科技有限公司 Image processing method and device
CN109194943B (en) * 2018-08-29 2020-06-02 维沃移动通信有限公司 Image processing method and terminal equipment
CN109583304A (en) * 2018-10-23 2019-04-05 宁波盈芯信息科技有限公司 A kind of quick 3D face point cloud generation method and device based on structure optical mode group
CN111344741B (en) * 2019-01-31 2023-06-20 深圳市瑞立视多媒体科技有限公司 Data missing processing method and device for three-dimensional track data
CN110059537A (en) * 2019-02-27 2019-07-26 视缘(上海)智能科技有限公司 A kind of three-dimensional face data acquisition methods and device based on Kinect sensor
CN111696145B (en) * 2019-03-11 2023-11-03 北京地平线机器人技术研发有限公司 Depth information determining method, depth information determining device and electronic equipment
CN109816786B (en) * 2019-03-28 2022-11-25 深圳市超准视觉科技有限公司 Three-dimensional point cloud reconstruction method and device and computer equipment
CN110009676B (en) * 2019-04-11 2019-12-17 电子科技大学 Intrinsic property decomposition method of binocular image
CN110288581B (en) * 2019-06-26 2022-11-04 电子科技大学 Segmentation method based on model for keeping shape convexity level set
CN110276316B (en) * 2019-06-26 2022-05-24 电子科技大学 Human body key point detection method based on deep learning
CN112562082A (en) * 2020-08-06 2021-03-26 长春理工大学 Three-dimensional face reconstruction method and system
CN114913287B (en) * 2022-04-07 2023-08-22 北京拙河科技有限公司 Three-dimensional human body model reconstruction method and system
CN115953400B (en) * 2023-03-13 2023-06-02 安格利(成都)仪器设备有限公司 Corrosion pit automatic detection method based on three-dimensional point cloud object surface

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103353935A (en) * 2013-07-19 2013-10-16 电子科技大学 3D dynamic gesture identification method for intelligent home system
CN104463880A (en) * 2014-12-12 2015-03-25 中国科学院自动化研究所 RGB-D image acquisition method
CN105469042A (en) * 2015-11-20 2016-04-06 天津汉光祥云信息科技有限公司 Improved face image comparison method
WO2016107638A1 (en) * 2014-12-29 2016-07-07 Keylemon Sa An image face processing method and apparatus
CN106780592A (en) * 2016-06-30 2017-05-31 华南理工大学 Kinect depth reconstruction algorithms based on camera motion and image light and shade

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103353935A (en) * 2013-07-19 2013-10-16 电子科技大学 3D dynamic gesture identification method for intelligent home system
CN104463880A (en) * 2014-12-12 2015-03-25 中国科学院自动化研究所 RGB-D image acquisition method
WO2016107638A1 (en) * 2014-12-29 2016-07-07 Keylemon Sa An image face processing method and apparatus
CN105469042A (en) * 2015-11-20 2016-04-06 天津汉光祥云信息科技有限公司 Improved face image comparison method
CN106780592A (en) * 2016-06-30 2017-05-31 华南理工大学 Kinect depth reconstruction algorithms based on camera motion and image light and shade

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RGBD-based human action recognition system; Zhou Kang (周康); China Master's Theses Full-text Database (Information Science and Technology); 2016-02-15 (No. 2); pp. 13-14, 45-47 *

Also Published As

Publication number Publication date
CN107169475A (en) 2017-09-15

Similar Documents

Publication Publication Date Title
CN107169475B (en) Face three-dimensional point cloud optimization method based on a Kinect camera
CN108549873B (en) Three-dimensional face recognition method and three-dimensional face recognition system
CN106803267B (en) Kinect-based indoor scene three-dimensional reconstruction method
CN104376552B (en) A kind of virtual combat method of 3D models and two dimensional image
CN101443817B (en) Method and device for determining correspondence, preferably for the three-dimensional reconstruction of a scene
GB2581374A (en) 3D Face reconstruction system and method
CN104335005B (en) 3D is scanned and alignment system
CN101996407B (en) Colour calibration method for multiple cameras
JP2009020761A (en) Image processing apparatus and method thereof
CN106157372A (en) A kind of 3D face grid reconstruction method based on video image
CN111523398A (en) Method and device for fusing 2D face detection and 3D face recognition
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN109754459B (en) Method and system for constructing human body three-dimensional model
WO2010041584A1 (en) Imaging system and method
CN110263768A (en) A kind of face identification method based on depth residual error network
CN106705849A (en) Calibration method of linear-structure optical sensor
CN110567441B (en) Particle filter-based positioning method, positioning device, mapping and positioning method
CN111144213A (en) Object detection method and related equipment
CN101976436A (en) Pixel-level multi-focus image fusion method based on correction of differential image
CN107292269A (en) Facial image false distinguishing method, storage, processing equipment based on perspective distortion characteristic
Fang et al. Laser stripe image denoising using convolutional autoencoder
CN107610219A (en) The thick densification method of Pixel-level point cloud that geometry clue perceives in a kind of three-dimensional scenic reconstruct
CN112330813A (en) Wearing three-dimensional human body model reconstruction method based on monocular depth camera
CN115880415A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
Condorelli et al. A comparison between 3D reconstruction using nerf neural networks and mvs algorithms on cultural heritage images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191119

Termination date: 20200619