CN105719352B - Face three-dimensional point cloud super-resolution fusion method and data processing apparatus applying it - Google Patents


Info

Publication number
CN105719352B
CN105719352B (application CN201610051083.6A)
Authority
CN
China
Prior art keywords
point cloud
face
matrix
dimensional point
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610051083.6A
Other languages
Chinese (zh)
Other versions
CN105719352A (en)
Inventor
黄志坚
郭裕兰
李洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Visualtouring Information Technology Co ltd
National University of Defense Technology
Original Assignee
Hunan Visualtouring Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Visualtouring Information Technology Co Ltd filed Critical Hunan Visualtouring Information Technology Co Ltd
Priority to CN201610051083.6A priority Critical patent/CN105719352B/en
Publication of CN105719352A publication Critical patent/CN105719352A/en
Application granted granted Critical
Publication of CN105719352B publication Critical patent/CN105719352B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a face three-dimensional point cloud super-resolution fusion method and a data processing apparatus applying it. Fusion is performed in four steps: face detection, face pose correction, point cloud alignment, and multi-view surface fusion. No fusion along each illumination direction is needed, and merging by projecting onto a two-dimensional plane greatly improves efficiency. Because the strategy of first correcting the pose and then fusing is adopted, the geometric properties of the face are fully exploited, which both preserves data precision and reduces the complexity of the problem. In addition, interpolation and filtering are not applied to the point cloud directly in three-dimensional space; the cloud is first projected onto a two-dimensional plane and then interpolated and filtered, which greatly improves algorithmic efficiency. The coarse-to-fine point cloud alignment strategy both avoids falling into local optima and accelerates convergence.

Description

Face three-dimensional point cloud super-resolution fusion method and data processing apparatus applying it
Technical field
The present invention relates to the field of three-dimensional image processing, and in particular to a face three-dimensional point cloud super-resolution fusion method and a data processing apparatus applying it.
Background technology
Increasingly mature with 3 Dimension Image Technique, real-time three dimensional data collection becomes reality.Due to three-dimensional point cloud Depth data is remained, preferably can express and describe real world so that artificial intelligence system is easier to understand real generation Boundary is expected to bring the essence of machine vision to break through.
Due to the limitation of moment sensor technology, the point cloud density of single frames acquisition is inadequate, it is difficult to support to model detail Description, meanwhile, single frames point cloud can include each noise like and hole etc..Importantly, the only model that single frames point cloud obtains exists Depth information under some visual angle, not usually complete model.Therefore, multiframe point cloud data is carried out super-resolution fusion is The basis of 3D vision and key link.
Existing three-dimensional point cloud super-resolution fusion method needs the memory expended and computationally intensive, data acquisition time It is long, and height is required to the software and hardware of processing equipment, meanwhile, the resolution ratio after being merged for face three-dimensional point cloud is difficult to reach To the requirement of application.
Summary of the invention
In view of this, the present invention provides a face three-dimensional point cloud super-resolution fusion method and a data processing apparatus applying it, which reduce the computational load of fusion while improving fusion efficiency and point cloud resolution, providing high-resolution, low-noise, hole-free point cloud data for three-dimensional face recognition.
In a first aspect, a face three-dimensional point cloud super-resolution fusion method is provided, including:
acquiring multiple frames of three-dimensional human body point cloud data in a scanning manner, each frame of three-dimensional human body point cloud data including at least point cloud data of a person's face;
performing three-dimensional face detection on each frame of the multi-frame three-dimensional human body point cloud data, and cropping out multiple initial face three-dimensional point clouds corresponding to different frames;
iteratively performing pose correction on each initial face three-dimensional point cloud until the rotation matrix between the face three-dimensional point cloud before and after the current iteration is the identity matrix;
performing a coarse alignment operation on each pair of pose-corrected face three-dimensional point clouds of adjacent frames, the coarse alignment operation computing the coarse alignment transformation matrix that maximizes the number of corresponding points between the down-sampled data sets of the face three-dimensional point clouds, where a corresponding-point pair refers to a pair of points, one in each data set, whose distance is less than twice the data resolution;
performing a fine alignment operation on each pair of coarsely aligned face three-dimensional point clouds of adjacent frames, the fine alignment operation iteratively applying point coordinate transformations to the face three-dimensional point clouds until their alignment error satisfies a predetermined condition;
performing a fusion operation on the visible point sets of all finely aligned face three-dimensional point clouds from each of multiple different viewpoints to obtain super-resolution fused point clouds, and merging the super-resolution fused point clouds obtained by the multiple fusion operations to obtain a super-resolution face three-dimensional point cloud.
Preferably, iteratively performing pose correction on each initial face three-dimensional point cloud until the rotation matrix between the face three-dimensional point cloud before and after the current iteration is the identity matrix includes:
computing the mean and the covariance matrix from the point cloud vector of the pre-iteration face three-dimensional point cloud, where the point cloud vector is P=[P1,P2,...,Pk,...,Pn], Pk is the coordinate vector of the k-th point, and n is the number of points in the face three-dimensional point cloud;
performing SVD decomposition on the covariance matrix to obtain the matrix of eigenvectors and the diagonal matrix of eigenvalues;
performing pose correction by the following formula to obtain the post-iteration point cloud vector:
P' = V(P - m)
where P' is the post-iteration face three-dimensional point cloud, P is the pre-iteration face three-dimensional point cloud, V is the matrix of eigenvectors, and m is the mean;
repeating the above iterative steps until the matrix of eigenvectors is the identity matrix.
Preferably, the coarse alignment operation includes:
taking the pose-corrected face three-dimensional point cloud of the previous frame as the reference object and the face three-dimensional point cloud of the following frame as the adjustment object, performing feature matching to obtain multiple matched key point pairs;
computing a corresponding point transfer matrix for each key point pair;
for the point transfer matrix of each key point pair, computing the matrix distances to the point transfer matrices of all other key point pairs, and performing least-squares fitting on the point transfer matrices of the multiple key point pairs whose matrix distances satisfy a threshold constraint to obtain a candidate point transfer matrix;
applying each candidate point transfer matrix to the adjustment object and counting the corresponding points;
selecting the candidate point transfer matrix that yields the most corresponding points as the coarse alignment transformation matrix.
Preferably, the fine alignment operation includes:
taking the coarsely aligned face three-dimensional point cloud of the previous frame as the reference object and the coarsely aligned face three-dimensional point cloud of the following frame as the adjustment object, and projecting the adjustment object toward the reference object, the projecting direction being a predetermined illumination direction;
for a subset of the projected point set of the adjustment object, searching the projected point set of the reference object for the nearest point set;
computing a fine alignment transformation matrix from the subset and the nearest point set;
transforming all points of the adjustment object by the fine alignment transformation matrix, and computing the error between the transformed point cloud and the reference object, the error being the sum of squared distances from each transformed point to the tangent plane through its projection point on the reference point cloud;
repeating the above iterative steps until the error is less than a predetermined threshold.
Preferably, the fusion operation includes:
projecting the visible point set of the corresponding viewpoint onto the coordinate plane corresponding to that viewpoint;
rasterizing a region of the coordinate plane, and merging the points that fall into the same grid cell into one merged point to obtain raster data, the coordinate of the merged point perpendicular to the coordinate plane being the mean of that coordinate over the points falling into the cell;
interpolating and filtering the raster data in the plane;
mapping the raster data back into three-dimensional space to obtain a super-resolution fused point cloud.
Preferably, the multiple different viewpoints include a left viewpoint, a right viewpoint, and a front viewpoint.
In a second aspect, a data processing apparatus is provided, including a processor adapted to perform the method described above.
Fusion is performed in four steps: face detection, face pose correction, point cloud alignment, and three-view surface fusion. No fusion along each illumination direction is needed, and merging by projecting onto a two-dimensional plane greatly improves efficiency. Because the strategy of first correcting the pose and then fusing is adopted, the geometric properties of the face are fully exploited, which both preserves data precision and reduces the complexity of the problem. In addition, interpolation and filtering are not applied to the point cloud directly in three-dimensional space; the cloud is first projected onto a two-dimensional plane and then interpolated and filtered, which greatly improves algorithmic efficiency. The coarse-to-fine point cloud alignment strategy both avoids falling into local optima and accelerates convergence.
Description of the drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of the face three-dimensional point cloud super-resolution fusion method of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the three-dimensional face point clouds during the face three-dimensional point cloud super-resolution fusion process in an embodiment of the present invention;
Fig. 3 is a flowchart of face pose correction in an embodiment of the present invention;
Fig. 4 is a schematic diagram of multi-view face three-dimensional point cloud fusion in an embodiment of the present invention.
Detailed description of embodiments
The present invention is described below on the basis of embodiments, but it is not limited to these embodiments. In the following detailed description of the present invention, some specific details are described at length; those skilled in the art can fully understand the present invention without the description of these details. To avoid obscuring the essence of the present invention, well-known methods, processes, flows, elements, and circuits are not described in detail.
In addition, those skilled in the art should understand that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout the specification and claims, words such as "include" and "comprise" should be construed in an inclusive sense rather than an exclusive or exhaustive one; that is, in the sense of "including but not limited to".
In the description of the present invention, it should be understood that the terms "first", "second", etc. are used for descriptive purposes only and cannot be construed as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise indicated, "multiple" means two or more.
Fig. 1 is a flowchart of the face three-dimensional point cloud super-resolution fusion method of an embodiment of the present invention.
As shown in Fig. 1, the method includes:
Step S100: acquiring multiple frames of three-dimensional human body point cloud data in a scanning manner, each frame of three-dimensional human body point cloud data including at least point cloud data of a person's face.
Step S200: performing three-dimensional face detection on each frame of the multi-frame three-dimensional human body point cloud data, and cropping out multiple initial face three-dimensional point clouds corresponding to different frames.
Step S300: iteratively performing pose correction on each initial face three-dimensional point cloud until the rotation matrix between the face three-dimensional point cloud before and after the current iteration is the identity matrix.
Step S400: performing a coarse alignment operation on each pair of pose-corrected face three-dimensional point clouds of adjacent frames, the coarse alignment operation computing the coarse alignment transformation matrix that maximizes the number of corresponding points between the down-sampled data sets of the face three-dimensional point clouds, where a corresponding-point pair refers to a pair of points, one in each data set, whose distance is less than twice the data resolution.
Step S500: performing a fine alignment operation on each pair of coarsely aligned face three-dimensional point clouds of adjacent frames, the fine alignment operation iteratively applying multiple point coordinate transformations until the error of the face three-dimensional point clouds after iteration satisfies a predetermined condition.
Step S600: performing a fusion operation on the visible point sets of all finely aligned face three-dimensional point clouds from each of multiple different viewpoints, and merging the three super-resolution fused point clouds obtained by the three fusion operations to obtain a super-resolution face three-dimensional point cloud; the fusion operation merges points of different face three-dimensional point clouds at a resolution higher than that of the three-dimensional human body point cloud data to obtain a super-resolution fused point cloud.
Fig. 2 is a schematic diagram of the three-dimensional face point clouds during the face three-dimensional point cloud super-resolution fusion process in an embodiment of the present invention. As can be seen from Fig. 2, in this process, multi-frame three-dimensional point clouds pass through face detection, face pose correction, coarse-to-fine two-stage point cloud alignment, and multi-view surface fusion, yielding a super-resolution face three-dimensional point cloud whose resolution exceeds that of the sensor.
Through the four steps of face detection, face pose correction, coarse-to-fine two-stage point cloud alignment, and multi-view surface fusion, no fusion along each illumination direction is needed, and merging by projecting onto a two-dimensional plane greatly improves efficiency. Because the strategy of first correcting the pose and then fusing is adopted, the geometric properties of the face are fully exploited, which both preserves data precision and reduces the complexity of the problem. In addition, interpolation and filtering are not applied to the point cloud directly in three-dimensional space; the cloud is first projected onto a two-dimensional plane and then interpolated and filtered, which greatly improves algorithmic efficiency. Meanwhile, the coarse-to-fine point cloud alignment strategy both avoids falling into local optima and accelerates convergence.
For step S100, multi-frame three-dimensional human body point cloud data embodying the characteristics of the face from multiple angles can be obtained by moving any existing three-dimensional image acquisition equipment while shooting.
For step S200, any existing face recognition algorithm may be used to crop the face part from each frame of three-dimensional human body point cloud data to obtain a face three-dimensional point cloud, so that subsequent point cloud fusion focuses on the face region. Since only a local region of the scene is fused, the memory overhead and computational complexity required by the method are greatly reduced, and higher-precision super-resolution fusion can be expected locally.
In a preferred embodiment, three-dimensional face detection may be performed with the Hough-forest point cloud object detection method proposed in H. Wang, C. Wang, H. Luo et al., "3-D Point Cloud Object Detection based on Supervoxel Neighborhood with Hough Forest Framework", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015, 8(4): 1570-1581.
For step S300, any existing face recognition algorithm may be used to perform face pose correction.
Preferably, the PCA method proposed in A. S. Mian, M. Bennamoun and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition", IEEE Trans. Pattern Anal. Mach. Intell., 2007, 29(11): 1927-1943, may be used to perform face pose correction. Specifically, the flowchart of face pose correction is shown in Fig. 3, and step S300 includes:
Step S310: computing the mean m and covariance matrix C from the point cloud vector of the pre-iteration face three-dimensional point cloud, where P=[P1,P2,...,Pk,...,Pn], Pk is the coordinate vector of the k-th point, and n is the number of points in the face three-dimensional point cloud.
That is, m = (1/n)Σ(k=1..n) Pk and C = (1/n)Σ(k=1..n) (Pk - m)(Pk - m)^T.
Step S320: performing SVD decomposition on the covariance matrix to obtain the matrix of eigenvectors and the diagonal matrix of eigenvalues; that is, the covariance matrix is decomposed as CV = VD, where V is the matrix of eigenvectors (which is a rotation matrix) and D is the diagonal matrix of eigenvalues.
Step S330: performing pose correction by the following formula to obtain the post-iteration point cloud vector:
P' = V(P - m)
where P' is the post-iteration face three-dimensional point cloud, P is the pre-iteration face three-dimensional point cloud, and m is the mean.
Step S340: judging whether the rotation matrix is the identity matrix; if not, returning to step S310 for the next iteration; otherwise, face pose correction is complete.
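The iterative loop of steps S310 to S340 can be sketched as below. This is a minimal illustration under stated assumptions, not the patented implementation: it uses an eigendecomposition of the symmetric covariance (equivalent to the SVD named in S320), orders the axes by decreasing variance, and fixes eigenvector signs deterministically so that the loop can terminate; the sign convention is our own choice, and the PCA reflection ambiguity is ignored.

```python
import numpy as np

def pca_pose_correction(points, max_iters=50, tol=1e-6):
    """Iteratively rotate a face point cloud onto its principal axes (S310-S340)."""
    P = np.asarray(points, dtype=float)
    for _ in range(max_iters):
        m = P.mean(axis=0)                          # S310: mean
        C = (P - m).T @ (P - m) / len(P)            # S310: covariance matrix
        _, V = np.linalg.eigh(C)                    # S320: eigendecomposition of C
        V = V[:, ::-1]                              # columns ordered by decreasing variance
        for j in range(3):                          # deterministic sign convention
            if V[np.abs(V[:, j]).argmax(), j] < 0:
                V[:, j] = -V[:, j]
        # S330: P' = V(P - m); with column eigenvectors this is (P - m) @ V
        P = (P - m) @ V
        if np.allclose(V, np.eye(3), atol=tol):     # S340: rotation is identity, done
            break
    return P
```

In practice the loop converges in two iterations: the first rotation exactly diagonalizes the empirical covariance, so the second eigendecomposition returns the identity.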
Steps S400 and S500 proceed from coarse to fine.
Specifically, in step S400, the face three-dimensional point cloud of the 1st frame is taken as the reference object and that of the 2nd frame as the adjustment object, and the adjustment object is adjusted so that it aligns with the reference object; then the coarsely aligned face three-dimensional point cloud of the 2nd frame is taken as the reference object and that of the 3rd frame as the adjustment object to align the 3rd frame, and so on, until the face three-dimensional point clouds corresponding to all frames are aligned.
Preferably, the Consistent Correspondences Verification (CCV) method (Guo Yulan, Research on point cloud local feature description and automatic object reconstruction and recognition [D], 2015, National University of Defense Technology, Changsha) may be used in S400 to perform coarse alignment, specifically including:
Step S410: taking the pose-corrected face three-dimensional point cloud of the previous frame as the reference object and the pose-corrected face three-dimensional point cloud of the following frame as the adjustment object, performing feature matching to obtain multiple matched key point pairs.
Step S420: computing a corresponding point transfer matrix for each key point pair.
Here, for the k-th key point pair, the corresponding point transfer matrix is the direct transformation matrix of that key point pair.
Step S430: for the point transfer matrix of each key point pair, computing the matrix distances to the point transfer matrices of all other key point pairs, and performing least-squares fitting on the point transfer matrices of the multiple key point pairs whose matrix distances satisfy a threshold constraint to obtain a candidate point transfer matrix.
The threshold constraint in effect tests whether the matrix distance is sufficiently small; that is, multiple sufficiently similar point transfer matrices are obtained and averaged by least-squares fitting, and the resulting average is taken as the candidate point transfer matrix.
Step S440: applying each candidate point transfer matrix to the adjustment object and counting the corresponding points. As described above, corresponding points are pairs of points, one in each data set, whose distance is less than twice the data resolution.
Step S450: selecting the candidate point transfer matrix that yields the most corresponding points as the coarse alignment transformation matrix.
That is, the candidate point transfer matrix that maximizes the number of corresponding points after coarse alignment of the whole cloud is selected to perform the coarse alignment.
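Steps S420 to S450 can be sketched as follows, assuming the per-pair transforms produced by the feature-matching stage (S410) are already available as 4x4 rigid matrices. The Frobenius norm as the matrix distance and a simple cluster average (with the rotation block re-orthonormalized) as the least-squares fit are our assumptions for illustration, not details fixed by the patent.

```python
import numpy as np

def count_correspondences(src, ref, T, resolution):
    """Number of src points within 2*resolution of some ref point after applying T."""
    moved = src @ T[:3, :3].T + T[:3, 3]
    d = np.linalg.norm(moved[:, None, :] - ref[None, :, :], axis=2).min(axis=1)
    return int((d < 2 * resolution).sum())

def coarse_align(src, ref, pair_transforms, resolution, dist_thresh=0.5):
    """CCV-style coarse alignment sketch (S420-S450)."""
    Ts = np.stack([np.asarray(T, dtype=float) for T in pair_transforms])
    best_T, best_count = None, -1
    for k in range(len(Ts)):
        # S430: Frobenius matrix distance to every other pair's transform
        dists = np.linalg.norm(Ts - Ts[k], axis=(1, 2))
        cluster = Ts[dists < dist_thresh]
        cand = cluster.mean(axis=0)               # least-squares average of the cluster
        U, _, Vt = np.linalg.svd(cand[:3, :3])    # keep the rotation block rigid
        cand[:3, :3] = U @ Vt
        n = count_correspondences(src, ref, cand, resolution)   # S440
        if n > best_count:                        # S450: most corresponding points wins
            best_count, best_T = n, cand
    return best_T, best_count
```

Averaging a cluster of mutually close transforms suppresses the noise of each individual key-point estimate, while the correspondence count rejects clusters built around outlier matches.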
Preferably, the present invention improves the fine alignment of step S500 by using an improved iterative closest point algorithm. Specifically, step S500 includes:
Step S510: taking the coarsely aligned face three-dimensional point cloud of the previous frame as the reference object and the coarsely aligned face three-dimensional point cloud of the following frame as the adjustment object, and projecting the adjustment object onto the reference point cloud, the projecting direction being a predetermined illumination direction.
Step S520: for a subset of the projected point set of the adjustment object, searching the projected point set of the reference object for the nearest point set.
Step S530: computing a fine alignment transformation matrix from the subset and the nearest point set.
Step S540: transforming all points of the adjustment object by the fine alignment transformation matrix, and computing the error between the transformed point cloud and the reference object, the error being the sum of squared distances from each point of the transformed cloud to the tangent plane through its projection point on the reference point cloud.
Step S550: judging whether the error is less than the predetermined threshold; if not, returning to step S510 for the next iteration; otherwise, ending the fine alignment operation.
The above iterative steps are repeated until the error is less than the predetermined threshold.
Since the points in space are projected onto the camera plane, the search for the nearest point set is accelerated. Meanwhile, the objective function is the sum of the point-to-plane distances over all corresponding point pairs rather than point-to-point distances; using point-to-plane distances accelerates convergence. When the camera moves slowly enough, the nonlinear objective function can be linearized, and the transformation matrix can finally be obtained in the form of normal equations. The computational load is thus greatly reduced in these two respects.
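Under the assumptions that a plain nearest-neighbour search stands in for the patent's camera-plane projection search and that normals of the reference cloud are available, the linearized point-to-plane iteration of steps S510 to S550 can be sketched as:

```python
import numpy as np

def point_to_plane_icp(src, ref, ref_normals, max_iters=30, eps=1e-8):
    """Fine alignment sketch (S510-S550): minimize the sum of squared
    point-to-tangent-plane distances, linearizing the rotation (small-angle
    assumption) so that each iteration solves 6-parameter normal equations."""
    src = np.asarray(src, dtype=float).copy()
    T_total = np.eye(4)
    for _ in range(max_iters):
        # S520: nearest reference point for every source point
        idx = np.linalg.norm(src[:, None] - ref[None, :], axis=2).argmin(axis=1)
        q, n = ref[idx], ref_normals[idx]
        r = np.einsum('ij,ij->i', src - q, n)       # signed point-to-plane residuals
        if float((r ** 2).sum()) < eps:             # S550: error below threshold
            break
        # S530: rows [p x n, n] for unknowns (rx, ry, rz, tx, ty, tz)
        A = np.hstack([np.cross(src, n), n])
        x, *_ = np.linalg.lstsq(A, -r, rcond=None)
        rx, ry, rz = x[:3]
        R = np.array([[1.0, -rz,  ry],              # small-angle rotation I + [w]x
                      [ rz, 1.0, -rx],
                      [-ry,  rx, 1.0]])
        src = src @ R.T + x[3:]                     # S540: transform all points
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = x[3:]
        T_total = step @ T_total
    return src, T_total
```

The linearization is exactly the step described in the text: the nonlinear rigid objective is replaced by a linear least-squares problem in six parameters, solvable in normal-equation form.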
For step S600, the embodiment of the present invention performs fusion from three different viewpoints to obtain three super-resolution fused point clouds and then merges them into the super-resolution face three-dimensional point cloud; only consistency processing on the boundaries is needed when merging. Specifically, as shown in Fig. 4, step S600 fuses from the front viewpoint, the left viewpoint, and the right viewpoint. Only the point cloud visible at each viewpoint is fused; for the regularized face point cloud, the left viewpoint in effect fuses the point cloud data of the left half of the face, while the right and front viewpoints correspond to the right half of the face and the frontal face, respectively. The fusion operation at a given viewpoint specifically includes:
Step S610: projecting the visible point set of the corresponding viewpoint onto the coordinate plane corresponding to that viewpoint.
Taking the left viewpoint as an example, in this step the visible point set is projected onto the yoz plane.
Step S620: rasterizing a region of the coordinate plane, and merging the points that fall into the same grid cell into one merged point to obtain raster data, the coordinate of the merged point perpendicular to the coordinate plane being the mean of that coordinate over the points falling into the cell.
In the example of fusing from the left viewpoint, all points are merged by rasterizing the yoz plane; the x coordinate of a merged point is the mean of the x coordinates of all points in the grid cell.
The same-point merging operations in the other directions are similar.
Step S630: interpolating and filtering the raster data in the plane.
Specifically, in the example of fusing from the left viewpoint, the cubic algorithm may be used to interpolate the raster data in the yoz plane to eliminate holes, and smoothing filtering may be applied to reduce noise. In the embodiment of the present invention, interpolation and filtering are not applied to the point cloud directly in three-dimensional space; the cloud is first projected onto a two-dimensional plane and then interpolated and filtered, which greatly improves algorithmic efficiency.
Step S640: mapping the raster data back into three-dimensional space to obtain a super-resolution fused point cloud.
Fusion is thus performed from multiple viewpoints without fusing along each illumination direction; merging by projecting onto a two-dimensional plane greatly improves efficiency.
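The per-viewpoint fusion of steps S610 to S640 can be sketched for the left viewpoint as follows. The cell size, the nearest-filled-cell hole filling (a crude stand-in for the cubic interpolation named in S630), and the omission of the smoothing filter are our simplifications for illustration.

```python
import numpy as np

def fuse_left_view(points, cell=0.1):
    """Single-view fusion sketch (S610-S640): project onto the yoz plane,
    rasterize, average the x coordinate per cell, fill holes, map back to 3D."""
    pts = np.asarray(points, dtype=float)
    y, z = pts[:, 1], pts[:, 2]
    y0, z0 = y.min(), z.min()
    iy = np.floor((y - y0) / cell).astype(int)    # S610/S620: rasterize the yoz plane
    iz = np.floor((z - z0) / cell).astype(int)
    ny, nz = iy.max() + 1, iz.max() + 1
    acc = np.zeros((ny, nz)); cnt = np.zeros((ny, nz))
    np.add.at(acc, (iy, iz), pts[:, 0])           # accumulate x per cell
    np.add.at(cnt, (iy, iz), 1.0)
    filled = cnt > 0
    grid = np.full((ny, nz), np.nan)
    grid[filled] = acc[filled] / cnt[filled]      # S620: merged point = mean x
    fy, fz = np.nonzero(filled)                   # S630: fill holes from the
    hy, hz = np.nonzero(~filled)                  # nearest filled cell
    if hy.size:
        d = (hy[:, None] - fy[None, :]) ** 2 + (hz[:, None] - fz[None, :]) ** 2
        j = d.argmin(axis=1)
        grid[hy, hz] = grid[fy[j], fz[j]]
    # S640: map the raster back to 3D, one point per cell centre
    gy, gz = np.meshgrid(np.arange(ny), np.arange(nz), indexing='ij')
    return np.stack([grid.ravel(),
                     y0 + (gy.ravel() + 0.5) * cell,
                     z0 + (gz.ravel() + 0.5) * cell], axis=1)
```

The right and front viewpoints would use the xoz and xoy planes with the y and z coordinates averaged, respectively, as the text describes.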
The above method and apparatus can be applied to a data processing system and executed by its processor. The data structures and code described in the detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system, including but not limited to volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), and other media, now known or later developed, capable of storing code and/or data.
The methods and processes described in the detailed description can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, it performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.
Furthermore, the methods and processes described herein can be included in hardware modules or devices. These modules or devices may include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), dedicated or shared processors that execute a particular software module or piece of code at a particular time, and other programmable logic devices now known or later developed. When the hardware modules or devices are activated, they perform the methods and processes included within them.
The above are merely preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (6)

1. A face three-dimensional point cloud super-resolution fusion method, comprising:
acquiring multiple frames of three-dimensional human body point cloud data by scanning, wherein each frame of three-dimensional human body point cloud data includes at least face point cloud data of a person;
performing three-dimensional face detection on each of the multiple frames of three-dimensional human body point cloud data and extracting multiple initial face three-dimensional point clouds corresponding to different frames;
performing posture correction on each initial face three-dimensional point cloud in a first iterative manner until the rotation matrix between the face three-dimensional point clouds before and after the current iteration is an identity matrix;
performing a coarse alignment operation on each pair of posture-corrected face three-dimensional point clouds corresponding to adjacent frames, wherein the coarse alignment operation computes, on downsampled face three-dimensional point clouds, the coarse alignment transformation matrix that maximizes the number of corresponding point pairs between the two data sets, a corresponding point pair being a pair of points in the two data sets whose distance is less than 2 data resolutions;
performing a fine alignment operation on the two coarsely aligned face three-dimensional point clouds corresponding to each pair of adjacent frames, wherein the fine alignment operation transforms point coordinates of the face three-dimensional point cloud in a second iterative manner until the error of the face three-dimensional point cloud satisfies a predetermined condition;
performing a fusion operation on the visible point sets of all finely aligned face three-dimensional point clouds from each of multiple different viewing angles to obtain super-resolution fused point clouds, and merging the super-resolution fused point clouds obtained by the multiple fusion operations to obtain a super-resolution face three-dimensional point cloud;
wherein the fusion operation comprises:
projecting the visible point set of the corresponding viewing angle onto the coordinate plane corresponding to that viewing angle;
rasterizing the region of the coordinate plane and merging the points that fall within the same grid cell into one merged point to obtain raster data, wherein the coordinate of the merged point perpendicular to the coordinate plane is the mean of the coordinates, perpendicular to the coordinate plane, of the points falling within the grid cell;
interpolating and filtering the raster data in the coordinate plane; and
mapping the raster data back to three-dimensional space to obtain the super-resolution fused point cloud.
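The per-view fusion operation of claim 1 (project to a coordinate plane, rasterize, average the out-of-plane coordinate per cell, map back to 3D) can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes NumPy, a front view with the xy-plane as the coordinate plane (z perpendicular), and omits the interpolation and filtering step; the function name `fuse_view` and the `grid` parameter are hypothetical.

```python
import numpy as np

def fuse_view(points, grid=0.5):
    """Grid-based fusion sketch for one view (front view assumed:
    xy is the coordinate plane, z is perpendicular to it).
    Points falling in the same grid cell are merged into one point
    whose z is the mean z of the cell's points."""
    P = np.asarray(points, dtype=float)            # shape (n, 3)
    ij = np.floor(P[:, :2] / grid).astype(np.int64)  # cell index per point
    cells, inv = np.unique(ij, axis=0, return_inverse=True)
    z_sum = np.zeros(len(cells))
    z_cnt = np.zeros(len(cells))
    np.add.at(z_sum, inv, P[:, 2])                 # accumulate z per cell
    np.add.at(z_cnt, inv, 1.0)
    z_mean = z_sum / z_cnt                         # mean out-of-plane coordinate
    # map the raster data back to 3D: cell centres in-plane, mean z out-of-plane
    xy = (cells + 0.5) * grid
    return np.column_stack([xy, z_mean])
```

With a 0.5-unit grid, two points in the same cell collapse to one merged point at the cell centre with their mean z, which is exactly the merging rule the claim describes.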
2. The face three-dimensional point cloud super-resolution fusion method according to claim 1, wherein performing posture correction on each initial face three-dimensional point cloud in the first iterative manner until the rotation matrix of the face three-dimensional point cloud before and after the current iteration is an identity matrix comprises:
computing a mean matrix and a covariance matrix from the point cloud vector of the face three-dimensional point cloud before the iteration, wherein the point cloud vector of the face three-dimensional point cloud before the iteration is P = [P1, P2, ..., Pk, ..., Pn], Pk is the coordinate vector of the k-th point, and n is the number of points of the face three-dimensional point cloud;
performing SVD decomposition on the covariance matrix to obtain a matrix composed of the eigenvectors and a diagonal matrix composed of the eigenvalues;
performing posture correction based on the following formula to obtain the point cloud vector of the face three-dimensional point cloud after the iteration:
P' = V(P - m)
where P' is the face three-dimensional point cloud after the iteration, P is the face three-dimensional point cloud before the iteration, V is the matrix composed of the eigenvectors, i.e. the rotation matrix, and m is the mean matrix;
determining whether the rotation matrix is an identity matrix;
if the rotation matrix is an identity matrix, the posture correction is complete;
if the rotation matrix is not an identity matrix, returning to the step of computing the mean matrix and the covariance matrix from the point cloud vector of the face three-dimensional point cloud before the iteration, to perform the next iteration.
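The iterative posture correction of claim 2 is essentially PCA alignment: centre the cloud, decompose the covariance matrix by SVD, apply P' = V(P − m), and repeat until the estimated rotation V is (near) the identity. A minimal NumPy sketch, assuming point clouds as (n, 3) arrays; the function name `pose_correct` and the tolerance are my own choices, not from the patent:

```python
import numpy as np

def pose_correct(points, max_iter=20, tol=1e-6):
    """PCA-based posture correction sketch: rotate the cloud onto its
    principal axes, iterating until the rotation matrix is ~identity."""
    P = np.asarray(points, dtype=float)            # (n, 3) point cloud
    for _ in range(max_iter):
        m = P.mean(axis=0)                         # mean matrix (centroid)
        C = np.cov((P - m).T)                      # 3x3 covariance matrix
        U, _, _ = np.linalg.svd(C)                 # C symmetric: U = eigenvectors
        V = U.T                                    # rows = principal axes
        P = (P - m) @ V.T                          # apply P' = V (P - m) per point
        if np.allclose(V, np.eye(3), atol=tol):    # rotation ~ identity: done
            break
    return P
```

After convergence the covariance of the corrected cloud is diagonal (the principal axes coincide with the coordinate axes) and the centroid is at the origin, which is why a subsequent SVD yields an identity rotation and the loop terminates.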
3. The face three-dimensional point cloud super-resolution fusion method according to claim 1, wherein the coarse alignment operation comprises:
taking the posture-corrected face three-dimensional point cloud corresponding to the earlier frame as the reference object and the posture-corrected face three-dimensional point cloud corresponding to the later frame as the adjusted object, and performing feature matching to obtain multiple matched key point pairs;
computing a corresponding point transformation matrix for each key point pair;
for the point transformation matrix of each key point pair, computing the matrix distances to the point transformation matrices of all other key point pairs, and performing least-squares fitting on the point transformation matrices of the multiple key point pairs whose matrix distances satisfy a threshold constraint to obtain candidate point transformation matrices;
transforming the adjusted object with each candidate point transformation matrix and counting the corresponding point pairs;
selecting the candidate point transformation matrix that yields the largest number of corresponding point pairs as the coarse alignment transformation matrix.
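The final selection step of claim 3 (apply each candidate transform, count corresponding point pairs, keep the best) can be sketched as below. This is an illustrative reduction only: it assumes NumPy, 4x4 homogeneous candidate transforms, and a brute-force nearest-neighbour search for the "distance less than 2 data resolutions" correspondence test; the feature matching and least-squares fitting that produce the candidates are not shown, and the function names are hypothetical.

```python
import numpy as np

def count_corresponding(ref, adj, resolution):
    """Count corresponding point pairs: points of `adj` whose nearest
    neighbour in `ref` is closer than 2 data resolutions (brute force)."""
    d = np.linalg.norm(adj[:, None, :] - ref[None, :, :], axis=2)
    return int(np.sum(d.min(axis=1) < 2 * resolution))

def pick_coarse_transform(ref, adj, candidates, resolution):
    """Choose, among candidate 4x4 transforms, the one producing the most
    corresponding point pairs (the selection criterion of claim 3)."""
    adj_h = np.column_stack([adj, np.ones(len(adj))])   # homogeneous coordinates
    counts = [count_corresponding(ref, (T @ adj_h.T).T[:, :3], resolution)
              for T in candidates]
    best = int(np.argmax(counts))
    return candidates[best], counts[best]
```

For real point clouds a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the O(n²) distance matrix, but the selection logic is the same.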
4. The face three-dimensional point cloud super-resolution fusion method according to claim 1, wherein the fine alignment operation comprises:
taking the coarsely aligned face three-dimensional point cloud corresponding to the earlier frame as the reference object and the coarsely aligned face three-dimensional point cloud corresponding to the later frame as the adjusted object, and projecting the adjusted object onto the reference object, the projection direction being a predetermined illumination direction;
for a subset of the projected point set of the adjusted object, searching the projected point set of the reference object for the nearest point set;
computing a fine alignment transformation matrix from the subset and the nearest point set;
transforming all points of the adjusted object according to the fine alignment transformation matrix, and computing the error between the transformed point cloud and the reference object, the error being the sum of squared distances from the transformed points to the tangent planes of the reference point cloud that contain their projection points;
determining whether the error is less than a predetermined threshold;
if the error is less than the predetermined threshold, terminating the fine alignment operation;
if the error is greater than or equal to the predetermined threshold, returning to the step of projecting the adjusted object onto the reference object, to perform the next iteration.
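The stopping criterion of claim 4 is a point-to-plane error, as used in point-to-plane ICP variants. A minimal sketch of that error term, assuming NumPy, per-point unit normals for the reference cloud, and a brute-force nearest-neighbour search in place of the patent's projection-based correspondence; the function name is hypothetical:

```python
import numpy as np

def point_to_plane_error(points, ref_points, ref_normals):
    """Sum of squared distances from each point to the tangent plane of
    the reference cloud at its nearest reference point (the fine-alignment
    error of claim 4, with nearest-neighbour correspondences)."""
    d = np.linalg.norm(points[:, None, :] - ref_points[None, :, :], axis=2)
    nn = d.argmin(axis=1)                    # nearest reference point per point
    diff = points - ref_points[nn]           # vector from plane anchor to point
    dist = np.einsum('ij,ij->i', diff, ref_normals[nn])  # signed plane distance
    return float(np.sum(dist ** 2))
```

The fine-alignment loop would recompute the transform, apply it, evaluate this error, and iterate while the error stays at or above the predetermined threshold.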
5. The face three-dimensional point cloud super-resolution fusion method according to claim 1, wherein the multiple different viewing angles include a left viewing angle, a right viewing angle, and a front viewing angle.
6. A data processing device, comprising a processor adapted to perform the method according to any one of claims 1-5.
CN201610051083.6A 2016-01-26 2016-01-26 Face three-dimensional point cloud super-resolution fusion method and apply its data processing equipment Active CN105719352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610051083.6A CN105719352B (en) 2016-01-26 2016-01-26 Face three-dimensional point cloud super-resolution fusion method and apply its data processing equipment

Publications (2)

Publication Number Publication Date
CN105719352A CN105719352A (en) 2016-06-29
CN105719352B true CN105719352B (en) 2018-10-19

Family

ID=56154094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610051083.6A Active CN105719352B (en) 2016-01-26 2016-01-26 Face three-dimensional point cloud super-resolution fusion method and apply its data processing equipment

Country Status (1)

Country Link
CN (1) CN105719352B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910102A (en) * 2016-07-25 2017-06-30 湖南拓视觉信息技术有限公司 The virtual try-in method of glasses and device
CN106909875B (en) * 2016-09-12 2020-04-10 湖南拓视觉信息技术有限公司 Face type classification method and system
US10262243B2 (en) * 2017-05-24 2019-04-16 General Electric Company Neural network point cloud generation system
CN107703499B (en) * 2017-08-22 2020-11-24 北京航空航天大学 Point cloud error correction method based on self-made foundation laser radar alignment error
CN110060336A (en) * 2019-04-24 2019-07-26 北京华捷艾米科技有限公司 Three-dimensional facial reconstruction method, device, medium and equipment
CN110728172B (en) * 2019-08-23 2022-10-18 北京迈格威科技有限公司 Point cloud-based face key point detection method, device and system and storage medium
WO2021077315A1 (en) * 2019-10-23 2021-04-29 Beijing Voyager Technology Co., Ltd. Systems and methods for autonomous driving
CN111079684B (en) * 2019-12-24 2023-04-07 陕西西图数联科技有限公司 Three-dimensional face detection method based on rough-fine fitting
CN111160208B (en) * 2019-12-24 2023-04-07 陕西西图数联科技有限公司 Three-dimensional face super-resolution method based on multi-frame point cloud fusion and variable model
CN111815768B (en) * 2020-09-14 2020-12-18 腾讯科技(深圳)有限公司 Three-dimensional face reconstruction method and device

Citations (1)

Publication number Priority date Publication date Assignee Title
CN105243374A (en) * 2015-11-02 2016-01-13 湖南拓视觉信息技术有限公司 Three-dimensional human face recognition method and system, and data processing device applying same

Non-Patent Citations (8)

Title
A Two-Phase Weighted Collaborative Representation for 3D partial face recognition with single sample; Yinjie Lei et al.; Pattern Recognition; 23 Oct. 2015; pp. 218-237 *
Accelerated Coherent Point Drift for Automatic Three-Dimensional Point Cloud Registration; Min Lu et al.; IEEE Geoscience and Remote Sensing Letters; 22 Dec. 2015; vol. 13, no. 2; pp. 162-166 *
An Accurate and Robust Range Image Registration Algorithm for 3D Object Modeling; Yulan Guo et al.; IEEE Transactions on Multimedia; Aug. 2014; vol. 16, no. 5; pp. 1377-1390 *
An Integrated Framework for 3-D Modeling, Object Detection, and Pose Estimation From Point-Clouds; Yulan Guo et al.; IEEE Transactions on Instrumentation and Measurement; Mar. 2015; vol. 64, no. 3; pp. 683-693 *
An efficient 3D face recognition approach using local geometrical signatures; Yinjie Lei et al.; Pattern Recognition; 2013; pp. 509-524 *
Multi-view 3D point cloud scene stitching method based on projection distribution entropy; Zhiguo Tan et al.; Chinese Journal of Lasers; Nov. 2012; vol. 39, no. 11; pp. 1-8 (in Chinese) *
Texture-consistent multi-view 3D face point cloud registration and fusion; Hongbo Huang; Software Guide; Sep. 2015; vol. 14, no. 9; pp. 49-51 (in Chinese) *
Research on a 3D face recognition system for expression variations; Binglian Zhu et al.; Chinese Journal of Scientific Instrument; Feb. 2014; vol. 35, no. 2; pp. 299-304 (in Chinese) *

Similar Documents

Publication Publication Date Title
CN105719352B (en) Face three-dimensional point cloud super-resolution fusion method and apply its data processing equipment
Li et al. Vehicle detection from 3d lidar using fully convolutional network
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
EP3977346A1 (en) Simultaneous localization and mapping method, device, system and storage medium
CN110852182B (en) Depth video human body behavior recognition method based on three-dimensional space time sequence modeling
CN110097553A (en) The semanteme for building figure and three-dimensional semantic segmentation based on instant positioning builds drawing system
CN110706248A (en) Visual perception mapping algorithm based on SLAM and mobile robot
CN106595659A (en) Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
CN110688905B (en) Three-dimensional object detection and tracking method based on key frame
CN105654492A (en) Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
Yang et al. A multi-task Faster R-CNN method for 3D vehicle detection based on a single image
CN104899590A (en) Visual target tracking method and system for unmanned aerial vehicle
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
CN104036546A (en) Method for carrying out face three-dimensional reconstruction at any viewing angle on basis of self-adaptive deformable model
CN111998862B (en) BNN-based dense binocular SLAM method
CN104751493A (en) Sparse tracking method on basis of gradient texture features
CN117115359B (en) Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion
Bu et al. Semi-direct tracking and mapping with RGB-D camera for MAV
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
KR102372298B1 (en) Method for acquiring distance to at least one object located in omni-direction of vehicle and vision device using the same
CN108694348B (en) Tracking registration method and device based on natural features
Huang et al. Multi‐class obstacle detection and classification using stereovision and improved active contour models
CN117132952A (en) Bird's eye view angle vehicle perception system based on many cameras
CN116843867A (en) Augmented reality virtual-real fusion method, electronic device and storage medium
Guo et al. Efficient planar surface-based 3D mapping method for mobile robots using stereo vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230906

Address after: 3/F, Building 19, Phase 1, Changsha Zhongdian Software Park, No. 39 Jianshan Road, Changsha High tech Development Zone, Changsha City, Hunan Province, 410205

Patentee after: HUNAN VISUALTOURING INFORMATION TECHNOLOGY Co.,Ltd.

Patentee after: National University of Defense Technology

Address before: 410205 A645, room 39, Changsha central software park headquarters, No. 39, Jian Shan Road, hi tech Development Zone, Hunan.

Patentee before: HUNAN VISUALTOURING INFORMATION TECHNOLOGY Co.,Ltd.