CN105844582B - Registration method and device for 3D image data - Google Patents
- Publication number
- CN105844582B
- Authority
- CN
- China
- Prior art keywords
- data
- target
- image data
- transformation
- key point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a registration method and device for 3D image data. The method comprises: performing key-point-based rigid alignment of source 3D image data with target 3D image data to obtain first transformed data; performing key-point-based non-rigid alignment of the first transformed data with the target 3D image data to obtain second transformed data; iteratively computing a transformation matrix from the second transformed data to the target 3D image data; and completing 3D registration of the source 3D image data to the target 3D image data according to the iteratively computed transformation matrix. Because key-point-based rigid alignment is combined with non-rigid alignment, a good initial transformation is obtained, and the transformation matrix is then solved by iterative computation starting from that initial transformation. The source 3D image data can therefore be mapped onto the target 3D image data more appropriately, more smoothly, and with a better fit, which improves registration accuracy and makes the registration result smoother and more consistent with the target data.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a registration method and device for 3D image data.
Background art
With the rapid development of three-dimensional (3D) image data acquisition devices, 3D image data are widely used in many fields, especially in computer vision.
3D registration is a common technique for processing 3D image data. It can be applied to object and scene reconstruction, organ modeling in computer-aided medicine, face recognition, 3D face modeling for animation, robot navigation, object recognition, and so on. Through 3D registration, the point-wise correspondences between two (or more) sets of 3D image data, and the transformation relations between them, can be found; based on the obtained correspondences and transformation relations, the 3D image data can then be aligned.
One existing registration method for 3D image data is based on the positions of key points in the data: the key points are used to segment the 3D image data, and the transformation of the points within each segment is determined from the transformations constructed for the key points of that segment. However, this method depends heavily on the accuracy of the key points, and the transformations of all non-key points rely entirely on the key points, so its accuracy is very low and the registration result agrees poorly with the target data.
Another existing registration method for 3D image data is based on non-rigid ICP (Iterative Closest Point) alignment. This method minimizes an energy function composed of a smoothness constraint, a key point constraint, and a corresponding-point distance constraint, in order to obtain the transformation relation between two sets of 3D image data (i.e., source 3D image data and target 3D image data). However, in this method the accuracy of the key points strongly affects the registration result, and an unreasonable weight setting for the key point constraint easily makes it difficult to align the source 3D image data with the target 3D image data. Moreover, the single-point constraint imposed at each key point has a small range of influence; when the source and target data differ greatly, a single key point is not enough to drive the points around it to undergo reasonably similar transformations, so the transformations of a key point and its surrounding points can easily differ greatly.
In summary, with the registration methods of the prior art, the accuracy with which source 3D image data are transformed to target 3D image data is low, and the registration result agrees poorly with the target 3D image data.
Summary of the invention
The purpose of the present invention is to solve at least one of the above technical deficiencies, and in particular to improve registration accuracy, so that the registration result is smoother and agrees better with the target data.
The present invention provides a registration method for 3D image data, comprising:
performing key-point-based rigid alignment of source 3D image data with target 3D image data to obtain first transformed data;
performing key-point-based non-rigid alignment of the first transformed data with the target 3D image data to obtain second transformed data;
iteratively computing a transformation matrix from the second transformed data to the target 3D image data; and
completing 3D registration of the source 3D image data to the target 3D image data according to the transformation matrix.
The present invention also provides a registration device for 3D image data, comprising:
a rigid alignment unit, configured to perform key-point-based rigid alignment of source 3D image data with target 3D image data to obtain first transformed data;
a non-rigid alignment unit, configured to perform key-point-based non-rigid alignment of the first transformed data with the target 3D image data to obtain second transformed data; and
a transformation matrix computing unit, configured to iteratively compute a transformation matrix from the second transformed data to the target 3D image data, and to complete 3D registration of the source 3D image data to the target 3D image data according to the computed transformation matrix.
In the solution of the present invention, when registering 3D image data, rigid alignment of the source and target 3D image data based on key points is followed by non-rigid alignment, so that the key points of the source and target 3D image data correspond well and a good initial transformation is obtained. The transformation matrix is then solved by iterative computation starting from this initial transformation, and 3D registration of the source 3D image data to the target 3D image data is completed according to the iteratively computed transformation matrix. The transformation matrix obtained by the method of the invention can therefore map the source 3D image data onto the target 3D image data more appropriately, more smoothly, and with a better fit, which improves the accuracy of 3D image registration and makes the registration result smoother and more consistent with the target data.
Further, in the technical solution of the present invention, the positions of the key points annotated in the target 3D image data can also be optimized, which improves key point accuracy and thereby helps to improve registration accuracy. Moreover, when iteratively solving for the transformation matrix, the invention adds inequality constraints on the neighborhoods of the key points on top of the single-point key point constraints, so that the farther a neighborhood point is from its key point, the weaker the dependence of its transformation on the key point transformation. Registration accuracy is thereby further improved, and the registration result becomes smoother and more consistent with the target data.
Additional aspects and advantages of the present invention will be set forth in part in the following description; they will become apparent from that description, or may be learned by practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the key point annotation method for 3D image data according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of key points annotated in 3D image data;
Fig. 3 is a schematic flowchart of the registration method for 3D image data according to an embodiment of the present invention;
Fig. 4a is a surface image of source 3D face data;
Fig. 4b is a textured image of target 3D face data;
Fig. 4c is a texture-free surface image of target 3D face data;
Fig. 5a is a textured image registered using the OSICP method;
Fig. 5b is a texture-free surface image registered using the OSICP method;
Fig. 5c is a textured image registered using the registration method of the present invention;
Fig. 5d is a texture-free surface image registered using the registration method of the present invention;
Fig. 6 is an internal structure block diagram of the registration device for 3D image data according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the internal structure of the key point position optimization module according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the internal structure of the transformation matrix computing unit according to an embodiment of the present invention.
Specific embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention indicates the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any units and all combinations of one or more of the associated listed items.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in commonly used dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art, and will not be interpreted in an idealized or overly formal sense unless specifically so defined herein.
Those skilled in the art will appreciate that the terms "terminal" and "terminal device" used herein include both devices having a wireless signal receiver, i.e., devices having only a wireless signal receiver without transmission capability, and devices having receiving and transmitting hardware capable of two-way communication over a bidirectional communication link. Such devices may include: cellular or other communication devices, with or without a single-line or multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio frequency receiver. The "terminal" or "terminal device" used herein may be portable, transportable, mounted in a vehicle (aviation, maritime and/or land), or adapted and/or configured to operate locally and/or to operate in a distributed form at any other location on the earth and/or in space. The "terminal" or "terminal device" used herein may also be a communication terminal, an internet terminal, or a music/video playback terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playback function, or a device such as a smart TV or a set-top box.
In the present invention, when registering 3D image data, key-point-based rigid alignment is combined with non-rigid alignment: after rigid alignment of the source 3D image data with the target 3D image data, non-rigid alignment is performed, so that the key points of the source and target 3D image data correspond well and a good initial transformation is obtained, transforming the source 3D image data into data closer to the target. Afterwards, iterative computation starting from this good initial transformation solves for the transformation matrix; compared with a transformation matrix solved by iterating directly from the source 3D image data or from a poor initial transformation, the result has better stability and accuracy. The transformation matrix obtained by the method of the invention can map the source 3D image data onto the target 3D image data more appropriately, more smoothly, and with a better fit; completing 3D registration according to the solved transformation matrix therefore improves registration accuracy, and makes the registration result smoother and more consistent with the target data.
Embodiments of the invention are described in detail below with reference to the accompanying drawings.
In an embodiment of the present invention, before the 3D image data are registered, key points may be annotated on the 3D image data in advance. Specifically, the method for annotating key points on 3D image data, as shown in Fig. 1, includes the following steps S101 to S103:
S101: receive 3D image data, the 3D image data including source 3D image data and target 3D image data.
The received 3D image data may be in formats such as a point cloud, a depth map, or a 3D mesh.
Preferably, after the 3D image data are received, they may be preprocessed, for example by cropping, noise removal, hole filling, and smoothing. Such preprocessing can be implemented with existing algorithms, such as those included in PCL (Point Cloud Library).
S102: annotate key points on the received 3D image data.
Key point annotation means designating, in the received 3D image data, points with distinctive features or meaning, and marking their positions; Fig. 2 shows a schematic diagram of key points annotated in 3D image data.
When annotating key points on the received 3D image data, a program may detect the key points automatically, the detection result may be adjusted manually after automatic detection for better accuracy, or the key points may be annotated entirely by hand. Taking face image data as an example, a scheme using automatic key point detection would first project the 3D image data into a 2D image, then invoke a 2D key point detection program, and finally back-project the detections to obtain the corresponding 3D key point positions.
Further, after the key points of the 3D image data have been annotated, the positions of the annotated key points may also be optimized according to the following step S103.
S103: optimize the positions of the key points annotated in the target 3D image data.
Preferably, this may be done as follows: extract features at the key points of the source and target 3D image data, and optimize the positions of the key points annotated in the target 3D image data according to the extracted features. Optimizing the key point positions improves key point accuracy and thereby helps to improve registration accuracy.
The extracted features include, but are not limited to, at least one of the following: color features and geometric features. Color features may include LBP, SIFT, SURF, etc.; geometric features may include NARF, VFH, FPFH, SURE, etc. These color and geometric features are well known to those skilled in the art, and all of them can be used for key point position optimization.
A preferred implementation of optimizing the positions of the key points annotated in the target 3D image data according to the extracted features is as follows: within the neighborhood of each key point of the target 3D image data, determine the point whose feature distance to the corresponding key point of the source 3D image data is smallest, and take the position of that point as the optimized position of the target key point.

As in formula 1, within the neighborhood $N(l_i^t)$ of the $i$-th key point $l_i^t$ of the target 3D image data, search for the point $u$ with the smallest feature distance to the corresponding source key point $l_i^s$, and take the position of that point as the optimized position of the target key point, where $f(\cdot)$ is the feature extraction function:

$$l_i^t \leftarrow \arg\min_{u \in N(l_i^t)} \left\| f(u) - f(l_i^s) \right\| \quad \text{(formula 1)}$$

In formula 1, $\| f(u) - f(l_i^s) \|$ denotes the feature distance between the point $u$ and the key point $l_i^s$.
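The neighborhood search described above can be sketched in a few lines. This is a minimal illustration rather than the patent's implementation; the function name, the Euclidean-ball neighborhood, and the radius value are all assumptions:

```python
import numpy as np

def optimize_keypoint(target_points, target_features, source_feature,
                      keypoint_idx, radius=0.05):
    """Refine one target key point (formula 1): inside the neighborhood of
    the annotated key point, pick the point whose feature vector is closest
    to the corresponding source key-point feature."""
    center = target_points[keypoint_idx]
    # Neighborhood N(l_i^t): all target points within `radius` of the key point.
    dists = np.linalg.norm(target_points - center, axis=1)
    nbr = np.where(dists <= radius)[0]
    # Feature distance ||f(u) - f(l_i^s)|| for each neighborhood point.
    feat_dists = np.linalg.norm(target_features[nbr] - source_feature, axis=1)
    best = nbr[np.argmin(feat_dists)]
    return best, target_points[best]
```

Any of the color or geometric features listed above could serve as `target_features`; the sketch only assumes each point carries a fixed-length feature vector.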
In fact, before 3D image data are registered, the corresponding points of the source and target 3D image data usually need to be estimated; that is, for each vertex of the source 3D image data, a point corresponding to that vertex is found in the target 3D image data.
Specifically, the corresponding points of the source and target 3D image data may be estimated according to at least one of the following methods: nearest-point estimation, normal shooting, or back-projection. It will be appreciated that the methods for estimating corresponding points are not limited to those mentioned above.
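The first of these methods, nearest-point estimation, can be sketched as follows; a brute-force distance matrix stands in for the k-d tree a real implementation would use, and all names are illustrative:

```python
import numpy as np

def nearest_point_correspondences(src, tgt):
    """Nearest-point estimation: pair each source vertex with the closest
    target vertex.  Brute force for clarity; a k-d tree would normally
    replace the full (n_src, n_tgt) distance matrix."""
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    return np.argmin(d, axis=1)  # index into tgt for every source vertex
```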
Further, the corresponding points of the source and target 3D image data may also be screened. Specifically, they may be screened according to at least one of the following principles:
a. reject corresponding points whose mutual distance is greater than a set distance value;
b. reject corresponding points whose normal directions differ by more than a set angle value;
c. reject corresponding points that are boundary points in the target 3D image data.
It will be appreciated that the screening principles are likewise not limited to those mentioned above.
By screening the corresponding points, some inaccurate correspondences can be rejected, which improves the accuracy of the estimated corresponding points and in turn helps to improve the accuracy of the 3D registration.
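Screening principles a and b can be sketched as follows (principle c, boundary-point rejection, is treated separately); the threshold values and function names are assumptions, not values from the patent:

```python
import numpy as np

def screen_correspondences(src_pts, src_normals, tgt_pts, tgt_normals,
                           pairs, max_dist=0.02, max_angle_deg=45.0):
    """Reject low-quality correspondences: pairs farther apart than max_dist
    (principle a), or whose unit normals disagree by more than max_angle_deg
    (principle b), are dropped.  `pairs` is a list of (src_i, tgt_j)."""
    kept = []
    cos_thresh = np.cos(np.deg2rad(max_angle_deg))
    for i, j in pairs:
        if np.linalg.norm(src_pts[i] - tgt_pts[j]) > max_dist:
            continue  # principle a: distance too large
        if np.dot(src_normals[i], tgt_normals[j]) < cos_thresh:
            continue  # principle b: normal angle too large
        kept.append((i, j))
    return kept
```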
When the third screening principle, principle c, is used to screen the corresponding points, boundary point estimation may first be performed on the target 3D image data. Whether a point is a boundary point may be judged as follows: compute the normal direction of the point, project the point and its neighborhood points into 2D along the normal direction, and judge from the position distribution of the 2D projected points whether the point is a boundary point. Preferably, the judgment is made as follows: centered on the point, within the 360-degree circle around it, if the angular range of the region containing no projected neighborhood points exceeds a set threshold, the point is determined to be a boundary point; otherwise it is not.
Since boundary points of 3D image data are often unstable and noisy, screening the corresponding points with principle c removes the influence of this instability, which also helps to improve the accuracy of the 3D registration.
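The angular-gap test just described can be sketched as follows; the tangent-plane basis construction and the threshold are assumptions, not details given in the patent:

```python
import numpy as np

def is_boundary_point(point, normal, neighbors, gap_thresh_deg=90.0):
    """Project the neighborhood onto the plane through `point` orthogonal to
    `normal`, sort the projections by angle around the point, and flag a
    boundary point when the largest angular gap containing no neighbor
    exceeds gap_thresh_deg."""
    n = normal / np.linalg.norm(normal)
    # Orthonormal basis (u, v) of the tangent plane.
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    d = neighbors - point
    angles = np.sort(np.arctan2(d @ v, d @ u))
    if len(angles) == 0:
        return True
    # Gaps between consecutive projected neighbors, wrapping around 360 degrees.
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    return bool(np.degrees(gaps.max()) > gap_thresh_deg)
```

A neighborhood covering only a half circle leaves a 180-degree empty gap and is flagged; a neighborhood spread over the full circle is not.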
Based on the annotated key points, the registration method for 3D image data provided by an embodiment of the present invention is as follows: perform key-point-based rigid alignment of the source 3D image data with the target 3D image data to obtain first transformed data; perform key-point-based non-rigid alignment of the first transformed data with the target 3D image data to obtain second transformed data; iteratively compute a transformation matrix from the second transformed data to the target 3D image data; and complete 3D registration of the source 3D image data to the target 3D image data according to the iteratively computed transformation matrix.
The detailed flow of the registration method for 3D image data provided by an embodiment of the present invention, as shown in Fig. 3, includes the following steps:
S301: perform key-point-based rigid alignment of the source 3D image data with the target 3D image data to obtain first transformed data.
Specifically, the existing ICP (Iterative Closest Point) method may be used to rigidly align the source 3D image data, on which key points have been annotated, with the target 3D image data; the source 3D image data after rigid alignment are obtained as the first transformed data.
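A minimal rigid ICP of the kind referenced here can be sketched as follows, alternating brute-force nearest-neighbour correspondences with the closed-form (SVD) rigid update. This is a generic textbook sketch under stated assumptions, not the patent's implementation:

```python
import numpy as np

def best_rigid_transform(src, tgt):
    """Least-squares rotation R and translation t mapping src onto tgt
    (Kabsch/SVD solution of the orthogonal Procrustes problem)."""
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - cs).T @ (tgt - ct)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, ct - R @ cs

def icp_rigid(src, tgt, iters=20):
    """Minimal ICP: alternate nearest-neighbour correspondence (brute force
    here) with the closed-form rigid update above."""
    cur = src.copy()
    for _ in range(iters):
        # Nearest target point for every current source point.
        idx = np.argmin(np.linalg.norm(cur[:, None] - tgt[None], axis=2), axis=1)
        R, t = best_rigid_transform(cur, tgt[idx])
        cur = cur @ R.T + t
    return cur
```

With a small initial misalignment the nearest-neighbour correspondences are correct and the alignment is recovered in one step; in the general case the correspondence screening described above would be applied between the two sub-steps.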
S302: perform key-point-based non-rigid alignment of the first transformed data with the target 3D image data to obtain second transformed data.
In this step, the non-rigid alignment may specifically be: align the first transformed data with the target 3D image data based on a first constraint condition, where the first constraint condition includes a key point constraint and a smoothness constraint, and the first transformed data are the source 3D image data after rigid alignment.
The first constraint condition may specifically be that the sum of the smoothness constraint function and the key point constraint function weighted by a preset weight matrix is minimized, where the preset weight matrix is a diagonal matrix constituted by the confidences of the key points of the source 3D image data.
In fact, aligning the first transformed data with the target 3D image data based on the first constraint condition means minimizing the energy function $E_1(X)$ of the following formula 2, composed of the smoothness constraint function $E_s$ and the key point constraint function $E_l$, so as to obtain the first transformation matrix from the first transformed data to the target 3D image data.
$$E_1(X) := E_s(X) + E_l(X) \quad \text{(formula 2)}$$

In formula 2, $X$ denotes $[X_1 \ \cdots \ X_n]^T$, where $X_i$ is a $3 \times 4$ matrix, the transformation matrix of the $i$-th vertex of the source 3D image data, and $n$ is the number of vertices; the key point term is weighted by the preset weight matrix $\bar{W}$ of formula 5, a diagonal matrix constituted by the confidences of the key points of the source 3D image data. A method for obtaining the confidences of the key points of the source 3D image data is described in detail below.
Specifically, the smoothness constraint function $E_s$ is:

$$E_s(X) = \sum_{(i, j) \in \varepsilon} \left\| X_i - X_j \right\|_F^2 \quad \text{(formula 3)}$$

The smoothness constraint function of formula 3 requires that, for every pair of vertices $i$ and $j$ connected by an edge in the source 3D image data, the Frobenius norm of the difference between their transformation matrices $X_i$ and $X_j$ be as small as possible; in formula 3, $\varepsilon$ denotes the set of edges.
The key point constraint function $E_l$ is:

$$E_l(X) = \sum_{(l_i, v_i) \in \mathcal{L}} \bar{W}_{ii} \left\| X_i\, l_i - v_i \right\|^2 \quad \text{(formula 4)}$$

The key point constraint function of formula 4 requires that each key point $l_i$ of the source 3D image data, after transformation by $X_i$, be as close as possible to its corresponding key point $v_i$ in the target 3D image data; $\mathcal{L}$ is the set of key point pairs.
The weight matrix $\bar{W}$ is:

$$\bar{W} = \theta \cdot \mathrm{diag}(\beta_1, \ldots, \beta_L) \quad \text{(formula 5)}$$

In formula 5, $L$ is the number of key points, $\theta$ is the weight of the key point constraint, and $\beta_i$ is the confidence of the $i$-th key point of the source 3D image data.
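The two energy terms can be sketched directly from formulas 3 to 5. The per-vertex 3×4 transforms and homogeneous coordinates follow the definitions above, while the function names and data layout are assumptions:

```python
import numpy as np

def smoothness_energy(X, edges):
    """E_s of formula 3: sum of squared Frobenius norms of the differences
    between per-vertex transforms X_i, X_j over every mesh edge (i, j)."""
    return sum(np.linalg.norm(X[i] - X[j], 'fro') ** 2 for i, j in edges)

def landmark_energy(X, src_kps, tgt_kps, kp_idx, betas, theta=1.0):
    """E_l of formula 4 with the diagonal weights of formula 5: each source
    key point, mapped by its vertex transform, should land on its
    corresponding target key point, weighted by theta * beta_i."""
    e = 0.0
    for i, l, v, b in zip(kp_idx, src_kps, tgt_kps, betas):
        lh = np.append(l, 1.0)                 # homogeneous coordinates
        e += theta * b * np.linalg.norm(X[i] @ lh - v) ** 2
    return e
```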
A method for computing the confidences of the key points of the source 3D image data is as follows: compute the feature distance between each key point of the source 3D image data and the corresponding key point in the target 3D image data, and apply a preset monotonically decreasing function to this feature distance to obtain the confidence of the key point.
For example, the confidence $\beta_i$ of the $i$-th key point of the source 3D image data can be obtained according to formula 6:

$$\beta_i = g(d_i) \quad \text{(formula 6)}$$

In formula 6, $d_i = \| f(v_j) - f(l_i) \|$ denotes the feature distance between the key points $v_j$ and $l_i$, where $l_i$ is the $i$-th key point of the source 3D image data, $v = [x, y, z, 1]^T$ denotes a vertex of the target 3D image data, and $v_j$ is the key point in the target 3D image data corresponding to $l_i$, located at the $j$-th vertex of the target data; $g$ is a monotonically decreasing function, for example the reciprocal function. The meaning of formula 6 is that the confidence of a key point decreases as the feature distance between the corresponding key points increases.
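The confidence computation of formula 6 is a one-liner once a decreasing $g$ is chosen; $\exp(-d)$ is used here as one admissible $g$ (the text only requires monotonic decrease), and the function name is an assumption:

```python
import numpy as np

def keypoint_confidence(source_feat, target_feat, g=lambda d: np.exp(-d)):
    """Confidence of one source key point (formula 6): a monotonically
    decreasing function g of the feature distance d_i = ||f(v_j) - f(l_i)||
    to the corresponding target key point."""
    d = np.linalg.norm(np.asarray(source_feat) - np.asarray(target_feat))
    return float(g(d))
```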
S303: iteratively compute the transformation matrix from the second transformed data to the target 3D image data.
This step computes the transformation matrix from the second transformed data to the target 3D image data with an iterative algorithm. There are many possible iteration schemes; a preferred one is as follows:
In each iteration, compute the transformation matrix from the transformed data of the previous iteration to the target 3D image data based on a second constraint condition. When each iteration completes, judge whether the convergence condition is satisfied according to the transformation matrices obtained in the current and previous iterations; if so, take the transformation matrix computed in the current iteration as the final solution; if not, continue with the next iteration.
In the first iteration, the second transformed data are used as the transformed data of the previous iteration; in subsequent iterations, the source 3D image data transformed by the transformation matrix computed in the previous iteration are used as the transformed data of the previous iteration.
The second constraint condition includes a key point constraint, a smoothness constraint, and a distance constraint between the transformed data of the previous iteration and the corresponding points of the target 3D image data.
Preferably, the second constraint condition may be that the sum of three terms is minimized: the distance constraint function, the product of the smoothness constraint function and a preset smoothness constraint weight, and the product of the key point constraint function and a preset key point constraint weight. That is, each iteration solves for the transformation matrix minimizing the energy function $E(X)$ of formula 7:

$$E(X) := E_d(X) + \alpha_t E_s(X) + B\, E_l(X) \quad \text{(formula 7)}$$

In formula 7, $E_s$ is the smoothness constraint function, as in formula 3; $E_l$ is the key point constraint function, as in formula 4; $E_d$ is the distance constraint function between the transformed data of the previous iteration and the corresponding points of the target 3D image data; $\alpha_t$ is the weight of the smoothness constraint function $E_s$, and $B$ is the weight of the key point constraint function. Both $\alpha_t$ and $B$ can be set empirically by the technician.
The above convergence condition may specifically be that the Euclidean distance between the transformation matrices obtained in two adjacent iterations is less than a preset iteration termination threshold; that is, the transformation matrices obtained in the current and previous iterations satisfy formula 8:

$$\left\| X^j - X^{j-1} \right\| < \delta \quad \text{(formula 8)}$$

In formula 8, $X^{j-1}$ is the transformation matrix obtained in the previous iteration, $X^j$ is the transformation matrix obtained in the current iteration, and $\delta$ is the preset iteration termination threshold.
In fact, to effectively avoid falling into a local optimum, the iteration can be run for a decreasing sequence of weights $\alpha_t \in \{\alpha_1, \ldots, \alpha_m\}$: for each $\alpha_t$, the energy function of formula 7 is minimized iteratively until the transformation matrices obtained in the current and previous iterations satisfy the convergence condition. Here $\alpha_1, \ldots, \alpha_m$ are $m$ empirically preset weights of the smoothness constraint function $E_s$, with $\alpha_t > \alpha_{t+1}$.
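The outer loop with a decreasing smoothness-weight schedule and the convergence test of formula 8 can be sketched as follows; the inner minimizer of formula 7 is abstracted as a caller-supplied `solve_step`, since its details depend on the chosen optimizer, and all names are illustrative:

```python
import numpy as np

def iterate_transform(solve_step, X0, alphas, delta=1e-4, max_inner=50):
    """For a decreasing schedule alpha_1 > ... > alpha_m, repeatedly call
    solve_step(X, alpha) (the minimizer of formula 7, supplied by the
    caller) until successive transforms satisfy ||X_j - X_{j-1}|| < delta
    (formula 8), then move to the next, smaller alpha."""
    X = X0
    for alpha in alphas:
        for _ in range(max_inner):
            X_new = solve_step(X, alpha)
            if np.linalg.norm(X_new - X) < delta:
                X = X_new
                break
            X = X_new
    return X
```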
The distance constraint function $E_d$ between the transformed data of the previous iteration and the corresponding points of the target 3D image data is:

$$E_d(X) = \sum_{v_i \in \mathcal{V}} w_i\, \mathrm{dist}^2\!\left(T,\; X_i^j v_i\right) \quad \text{(formula 9)}$$

In formula 9, $X_i^j$ is the transformation matrix of the $i$-th vertex of the source 3D image data in the $j$-th iteration; $T$ is the target 3D image data; $\mathrm{dist}$ is the distance between the position of the source vertex $v_i$ under the current transformation $X_i^j$ and its corresponding point in the target 3D image data; $w_i$ is a binary parameter that is 1 when $X_i^j v_i$ has a corresponding point in the target 3D image data and 0 when it has no acceptable corresponding point; $\mathcal{V}$ is the vertex set of the source 3D image data.
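The distance term of formula 9 can be sketched as follows, assuming correspondences have already been estimated and screened; the names and data layout are illustrative:

```python
import numpy as np

def data_energy(X, src, tgt_corr, w):
    """E_d of formula 9: weighted squared distance between each transformed
    source vertex X_i v_i and its current target correspondence; w_i is 0
    for vertices with no acceptable correspondence."""
    e = 0.0
    for Xi, v, c, wi in zip(X, src, tgt_corr, w):
        vh = np.append(v, 1.0)                 # homogeneous coordinates
        e += wi * np.sum((Xi @ vh - c) ** 2)
    return e
```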
In practical applications, the key point constraint in the second constraint condition is a single-point key point constraint. A single-point key point constraint has a small range of influence: when the source 3D rendering data differs greatly from the target 3D rendering data, a single key point is not enough to drive the surrounding points to undergo a reasonably similar transformation, and the transformation of a key point may differ greatly from that of its surrounding points.
To overcome the above defect of the single-point key point constraint, the technical solution of the present invention adds a key-point-neighborhood constraint. By combining the single-point key point constraint with the key-point-neighborhood constraint, the registration precision is further improved, the registration result is smoother, and the registration result agrees better with the target data.

Therefore, in the technical solution of the present invention, the second constraint condition further includes a key-point-neighborhood inequality constraint, specifically: the farther a point in the neighborhood of a key point is from the key point, the looser the constraint.
Formula 10 shows a preferred key-point-neighborhood inequality constraint. In each iteration, the transformation matrix from the transformation data of the last iteration to the target 3D rendering data is computed based on the second constraint condition; that is, under the condition that the key-point-neighborhood inequality shown in the following Formula 10 is satisfied, the transformation matrix minimizing the energy function E(X) of Formula 7 is solved:

|X_u^e − X_{i_m}^e| ≤ f(||v_u − v̂_m||), for all v_u ∈ N(v̂_m) (Formula 10)

In Formula 10, X_u^e denotes the e-th element of the transformation matrix of the u-th vertex of the source 3D rendering data computed in the current iteration; X_{i_m}^e denotes the e-th element of the transformation matrix of the m-th key point of the source 3D rendering data, i.e. the i_m-th vertex, computed in the current iteration; N(v̂_m) is the neighborhood of v̂_m; v̂_m is the m-th key point, i.e. the i_m-th vertex, in the source 3D rendering data; f is a function that grows as the distance between v_u and v̂_m grows.
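The distance-dependent bound of the neighborhood inequality can be illustrated as follows. The linear form of f and the `slack`/`gain` parameters are assumptions for illustration; the patent only requires that f grows with the distance between the neighborhood point and the key point.

```python
import math

def neighborhood_bound(v_u, v_key, slack=0.01, gain=1.0):
    """Upper bound f(||v_u - v_key||) of the key-point-neighborhood
    inequality (Formula 10): the farther the neighborhood point from the
    key point, the looser the bound on the element-wise difference of
    their transforms.  Linear f is an illustrative choice."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v_u, v_key)))
    return slack + gain * d

def satisfies_constraint(x_u_e, x_key_e, v_u, v_key):
    # |X_u^e - X_{i_m}^e| must not exceed f(||v_u - v_key||)
    return abs(x_u_e - x_key_e) <= neighborhood_bound(v_u, v_key)
```

Points right next to a key point are held tightly to its transform, while distant neighborhood points are allowed to deviate, matching the design rationale in the text.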
Finally, the iteratively computed transformation matrix that satisfies the convergence condition can be taken as the final solved transformation matrix.
Aimed at the ineffectiveness of the single-point key point constraint, the present invention adds a key-point-neighborhood constraint, which not only explicitly adds a constraint from the key point onto its neighborhood points, but also, by combining the single-point key point constraint with the distance constraint of the neighborhood points, enlarges the range of influence of the key point, so that the points around a key point are better driven and can follow the key point through a reasonable and effective transformation.

Moreover, the present invention models the key-point-neighborhood constraint as a group of key-point-neighborhood inequality constraints whose strength depends on the distance of the neighborhood point from the key point: the farther the distance, the looser the constraint (as shown in Formula 10). This way of modeling is more reasonable than adding the key-point-neighborhood constraint into the energy function E(X). Adding it into E(X) would amount to establishing a group of equality constraints, requiring the transformations of the neighborhood points to be identical to that of the key point, which is unreasonable. In fact, the farther a neighborhood point is from the key point, the weaker the dependence of its transformation on the key point's transformation; the key-point-neighborhood inequality constraint of the present invention exactly satisfies this requirement, further improving the registration precision, making the registration result smoother and in better agreement with the target data.
In fact, in each iteration, before computing, based on the second constraint condition, the transformation matrix from the transformation data of the last iteration to the target 3D rendering data, the corresponding points between the transformation data of the last iteration and the target 3D rendering data can be estimated and screened. It should be noted that the above methods for corresponding-point estimation and screening between the source 3D rendering data and the target 3D rendering data also apply to the corresponding-point estimation and screening between the first transformation data and the target 3D rendering data, and between the transformation data in each iteration and the target 3D rendering data.
More preferably, after the transformation matrix from the second transformation data to the target 3D rendering data is iteratively computed, the iteratively computed transformation matrix can be smoothed according to the following step S304:

S304: based on a third constraint condition, smooth the iteratively computed transformation matrix.

Here, the third constraint condition includes: a smoothness constraint, and a distance constraint between third transformation data and the corresponding points of the target 3D rendering data; the third transformation data refers to the source 3D rendering data transformed by the iteratively computed transformation matrix.

Specifically, the third constraint condition is: the sum of the product of the smoothness constraint function and the weight coefficient of the smoothness constraint, and the distance constraint function between the third transformation data and the corresponding points of the target 3D rendering data, is minimal. That is, the energy function E_2(X) shown in the following Formula 11 is minimized:

E_2(X) = ᾱ E_s(X) + E_d(X) (Formula 11)

In Formula 11, ᾱ is the weight coefficient of the smoothness constraint; E_s is the smoothness constraint function, which can be as shown in the above Formula 3; E_d is the distance constraint function between the third transformation data and the corresponding points of the target 3D rendering data, which can be as shown in the above Formula 9.
S305: complete the 3D registration of the source 3D rendering data to the target 3D rendering data according to the solved transformation matrix.

Specifically, the 3D registration of the source 3D rendering data to the target 3D rendering data can be completed directly according to the iteratively computed transformation matrix. More preferably, it can also be completed according to the transformation matrix after smoothing. In fact, completing the 3D registration of the source 3D rendering data to the target 3D rendering data according to the solved transformation matrix means acting on the source 3D rendering data with the solved transformation matrix, realizing the alignment of the source 3D rendering data with the target 3D rendering data.
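Step S305, acting on the source data with the solved transforms, can be sketched as below. The 3×4 per-vertex affine layout is an assumption consistent with the per-vertex matrices X_j^i used earlier, not a detail stated at this point in the patent.

```python
def apply_transforms(vertices, transforms):
    """Registration step S305 (sketch): act on each source vertex v_i,
    taken in homogeneous coordinates (x, y, z, 1), with its solved 3x4
    affine transformation matrix X_i, yielding the registered vertex."""
    out = []
    for (x, y, z), X in zip(vertices, transforms):
        h = (x, y, z, 1.0)
        out.append(tuple(sum(row[k] * h[k] for k in range(4)) for row in X))
    return out
```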
The technical effect achieved by the present invention is illustrated below by a simple comparison.

The above method is applied to the registration of 3D face data: the source 3D face data is a template of a 3D face, and the target 3D face data is 3D point cloud data obtained with a 3D scanner. Fig. 4a shows the surface picture of the source 3D face data, Fig. 4b shows the texture picture of the target 3D face data, and Fig. 4c shows the texture-free surface picture of the target 3D face data.
The register method of the present invention and the existing Optimal Step non-rigid ICP (OSICP) method were used to register the above 3D face data respectively; the registration results are shown in Fig. 5a to 5d.

Comparing the registration results of the two, it can be seen that the OSICP method produces distortion near the key points of the eyes, nose and mouth, whereas the method of the present invention obtains a better result.
In addition, the method of the present invention and the OSICP method are compared quantitatively below in terms of smoothness and the degree of agreement with the target data.

Smoothness: for each point, compute the displacement differences between the point and its neighborhood points and average them, giving the single-point average displacement difference; then average the single-point average displacement differences over all points. This overall average serves as the quantized value of smoothness: the smaller this value, the smoother the registration result, and the better the smoothness of the register method.

Agreement with the target data: find the corresponding points in the target data; compute the distance between each point and its corresponding point; average the distances over all points. This average serves as the quantized value of agreement: the smaller this value, the better the result agrees with the target data.
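The two evaluation metrics just described can be computed as follows. This is a direct reading of the definitions above; the data layout (`displacements` as per-point vectors, `neighbors` as index lists) is an assumption for illustration.

```python
import math

def smoothness_metric(displacements, neighbors):
    """Quantized smoothness: for each point, average the norm of the
    difference between its displacement and each neighbor's (single-point
    average displacement difference); then average over all points.
    Lower means a smoother registration result."""
    per_point = []
    for i, nbrs in enumerate(neighbors):
        diffs = [math.dist(displacements[i], displacements[j]) for j in nbrs]
        per_point.append(sum(diffs) / len(diffs))
    return sum(per_point) / len(per_point)

def agreement_metric(points, corresponding):
    """Quantized agreement with the target: mean distance between each
    registered point and its corresponding target point.  Lower is better."""
    dists = [math.dist(p, c) for p, c in zip(points, corresponding)]
    return sum(dists) / len(dists)
```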
Table 1 below shows the quantized results of the method of the present invention and the OSICP method. It can be seen that, in terms of both smoothness and agreement with the target data, the method of the present invention has better registration quality and effect.
Table 1
Method | Quantized value of smoothness | Quantized value of agreement with target data
OSICP method | 0.3361 | 0.6486
Method of the invention | 0.3152 | 0.6096
In summary, whether in terms of visual effect or quantized registration quality, the register method of the present invention achieves a better effect.
Based on the above register method of 3D rendering data, an embodiment of the present invention provides a register device of 3D rendering data whose internal structure block diagram, as shown in Fig. 6, comprises: a rigid alignment unit 601, a non-rigid alignment unit 602 and a transformation matrix computing unit 603.
The rigid alignment unit 601 is used to perform rigid alignment of the source 3D rendering data with the target 3D rendering data based on the key points, obtain the first transformation data, and output an iteration notice and the first transformation data.

After receiving the iteration notice output by the rigid alignment unit 601, the non-rigid alignment unit 602 performs non-rigid alignment of the first transformation data output by the rigid alignment unit 601 with the target 3D rendering data based on the key points, obtains the second transformation data and outputs it.
The transformation matrix computing unit 603 is used to iteratively compute the transformation matrix from the second transformation data to the target 3D rendering data, and to complete the 3D registration of the source 3D rendering data to the target 3D rendering data according to the computed transformation matrix.

More preferably, the non-rigid alignment unit 602 is specifically used to align, based on the first constraint condition, the first transformation data output by the rigid alignment unit 601 with the target 3D rendering data; here, the first constraint condition includes a key point constraint and a smoothness constraint. Preferably, the first constraint condition is specifically: the sum of the product of the smoothness constraint function and a preset weight matrix, and the key point constraint function, is minimal; here, the preset weight matrix is a diagonal matrix composed of the confidence levels of the key points of the source 3D rendering data.
Correspondingly, the above register device may also include a key point confidence computing unit 604.

The key point confidence computing unit 604 is used to compute the feature distance between corresponding key points of the source 3D rendering data and the target 3D rendering data, and to act on this feature distance with a preset monotonically decreasing function, obtaining the confidence levels of the key points of the source 3D rendering data.
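The confidence computation can be sketched as below. The Gaussian `exp(-d²/2σ²)` is one plausible monotonically decreasing function chosen for illustration; the patent only requires that the function decrease monotonically in the feature distance.

```python
import math

def keypoint_confidence(feat_src, feat_tgt, sigma=1.0):
    """Confidence of a source key point: the feature distance to the
    corresponding target key point, pushed through a monotonically
    decreasing function.  The Gaussian form and `sigma` are illustrative
    assumptions, not taken from the patent."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_src, feat_tgt)))
    return math.exp(-d * d / (2.0 * sigma * sigma))
```

Key points whose features match well between source and target receive confidence near 1 and thus dominate the diagonal weight matrix of the first constraint condition.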
More preferably, the above register device may also include a smoothing unit 605.

The smoothing unit 605 is used to smooth the transformation matrix iteratively computed by the transformation matrix computing unit 603. Specifically, the smoothing unit 605 smooths, based on the third constraint condition, the transformation matrix iteratively computed by the transformation matrix computing unit 603; here, the third constraint condition includes a smoothness constraint and a distance constraint between the third transformation data and the corresponding points of the target 3D rendering data; the third transformation data refers to the source 3D rendering data transformed by the iteratively computed transformation matrix.
More preferably, the above register device may also include a key point position optimization unit 606.

The key point position optimization unit 606 is used to perform position optimization on the key points marked in the target 3D rendering data. As shown in Fig. 7, the key point position optimization unit 606 may specifically include a feature extraction subunit 701 and a position optimization subunit 702.

The feature extraction subunit 701 is used to perform feature extraction on the key points of the source 3D rendering data and the target 3D rendering data; the extracted features include at least one of the following: color features and geometric features.

The position optimization subunit 702 is used to perform position optimization on the key points marked in the target 3D rendering data according to the features extracted by the feature extraction subunit 701. Specifically, the position optimization subunit 702 is used to determine, in the neighborhood of a key point in the target 3D rendering data, the point with the smallest feature distance to the corresponding key point of the source 3D rendering data, and to take the position of the determined point as the position of the key point of the target data after optimization.
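The neighborhood search performed by the position optimization subunit can be sketched as follows. The data layout (`neighborhood` as a list of (position, feature) pairs) is a hypothetical representation chosen for illustration.

```python
import math

def optimize_keypoint(src_feature, neighborhood):
    """Within the neighborhood of a marked target key point, pick the
    point whose feature vector is closest to the source key point's
    feature, and return its position as the optimized key-point
    position.  `neighborhood` is a list of (position, feature) pairs;
    this layout is an assumption for illustration."""
    best_pos, best_d = None, float("inf")
    for pos, feat in neighborhood:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(src_feature, feat)))
        if d < best_d:
            best_pos, best_d = pos, d
    return best_pos
```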
As shown in Fig. 8, the transformation matrix computing unit 603 may specifically include an iteration subunit 801 and a termination judgment subunit 802.

After receiving an iteration notice, the iteration subunit 801 performs one iteration: it computes, based on the second constraint condition, the transformation matrix from the transformation data of the last iteration to the target 3D rendering data, and sends a single-iteration completion notice; here, the second constraint condition includes a key point constraint, a smoothness constraint, and a distance constraint between the transformation data of the last iteration and the corresponding points of the target 3D rendering data. In the first iteration, the second transformation data serves as the transformation data of the last iteration; in a non-first iteration, the source 3D rendering data transformed by the transformation matrix computed in the last iteration serves as the transformation data of the last iteration.

On receiving the single-iteration completion notice sent by the iteration subunit 801, the termination judgment subunit 802 judges, according to the transformation matrices obtained in the current and the last iteration, whether the convergence condition is satisfied; if so, it takes the transformation matrix computed in the current iteration as the final solved transformation matrix; if not, it sends an iteration notice to the iteration subunit 801.
Preferably, the second constraint condition is specifically: the sum of the product of the smoothness constraint function and a preset smoothness constraint weight, the distance constraint function, and the product of the key point constraint function and a preset key point constraint weight, is minimal.

Further, the second constraint condition may also include a key-point-neighborhood inequality constraint. Preferably, the key-point-neighborhood inequality constraint may specifically be: the farther a point in the neighborhood of a key point is from the key point, the looser the constraint.
More preferably, after receiving an iteration notice and after screening the corresponding points between the transformation data of the last iteration and the target 3D rendering data, the iteration subunit 801 computes, based on the second constraint condition, the transformation matrix from the transformation data of the last iteration to the target 3D rendering data, and sends a single-iteration completion notice.

The screening principles include, but are not limited to, at least one of the following:

rejecting corresponding points whose distance to the point is greater than a set distance value;

rejecting corresponding points whose normal direction forms an angle greater than a set angle value with that of the point;

rejecting corresponding points that are boundary points in the target 3D rendering data.
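The three screening principles above can be combined into one acceptance test, sketched below. The thresholds are illustrative assumptions, and the normals are assumed to be unit length so that their dot product is the cosine of the angle between them.

```python
import math

def screen_correspondence(p, q, n_p, n_q, q_is_boundary,
                          max_dist=0.05, max_angle_deg=60.0):
    """Accept or reject one correspondence (p -> q) by the three
    screening principles: distance threshold, normal-angle threshold,
    and rejection of target boundary points.  Thresholds are
    illustrative; `n_p`, `n_q` are assumed unit normals."""
    if q_is_boundary:                       # target boundary point
        return False
    if math.dist(p, q) > max_dist:          # too far apart
        return False
    cos_angle = sum(a * b for a, b in zip(n_p, n_q))
    if cos_angle < math.cos(math.radians(max_angle_deg)):
        return False                        # normals disagree too much
    return True
```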
Further, before screening the corresponding points between the transformation data of the last iteration and the target 3D rendering data, the iteration subunit 801 may also estimate the corresponding points between the transformation data and the target 3D rendering data according to at least one of the following methods: nearest point estimation, normal shooting, back projection.
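Of the three listed estimation methods, nearest point estimation is the simplest; a brute-force sketch is given below. In practice a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the linear scan; that optimization is omitted here for clarity.

```python
import math

def nearest_point_correspondences(src_points, tgt_points):
    """Nearest-point correspondence estimation: each transformed source
    point is matched to its closest target point.  Brute force here;
    a k-d tree would be used on real point clouds."""
    return [min(tgt_points, key=lambda q: math.dist(p, q))
            for p in src_points]
```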
For the functions of the units and subunits of the register device of the present invention and their implementations, reference may be made to the description of the corresponding steps of the above register method, which is not repeated here.
In conclusion in technical solution of the present invention, when carrying out the registration of 3D rendering data, to source 3D rendering data and
After target 3D rendering data are based on key point progress rigid alignment, then non-rigid alignment is carried out, so that source 3D rendering data and mesh
The key point for marking 3D rendering data corresponds to well, and obtains a preferable initial transformation;Later, preferable based on what is obtained
Initial transformation, then row iteration calculating solves transformation matrix, completes source 3D rendering number according to the transformation matrix that iterates to calculate out
It is registered according to the 3D to target 3D rendering data.To which the transformation matrix that method of the invention obtains can be by source 3D rendering data
More proper, the smooth, goodness of fit higher corresponds to target 3D rendering data, to improve registration essence when 3D rendering registration
Degree, so that registering result is more smooth, registering result and target data are more identical.
Further, in the technical solution of the present invention, position optimization can also be performed on the key points marked in the target 3D rendering data, improving the precision of the key points and thus benefiting the registration precision; moreover, when iteratively solving the transformation matrix, the present invention adds, on the basis of the single-point key point constraint, a key-point-neighborhood inequality constraint, so that the farther a neighborhood point is from the key point, the weaker the dependence of its transformation on the key point's transformation, further improving the registration precision and making the registration result smoother and in better agreement with the target data.
Those skilled in the art will appreciate that the present invention covers apparatuses for performing one or more of the operations described herein. These apparatuses may be specially designed and manufactured for the required purposes, or may comprise known devices in general-purpose computers. These apparatuses have computer programs stored therein that are selectively activated or reconfigured. Such computer programs may be stored in a device-readable (e.g. computer-readable) medium, or in any type of medium suitable for storing electronic instructions and coupled to a bus; the computer-readable medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, a readable medium includes any medium in which information is stored or transmitted by a device (e.g. a computer) in a readable form.
Those skilled in the art will appreciate that each block of these structure diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions. Those skilled in the art will appreciate that these computer program instructions can be supplied to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus for implementation, so that the schemes specified in a block or blocks of the structure diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are executed by the computer or the other programmable data processing apparatus.
Those skilled in the art will appreciate that the various operations, methods, and steps, measures and schemes in the processes discussed in the present invention may be alternated, changed, combined or deleted. Further, other steps, measures and schemes in the various operations, methods and processes discussed in the present invention may also be alternated, changed, rearranged, decomposed, combined or deleted. Further, steps, measures and schemes in the prior art found in the various operations, methods and processes disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined or deleted.
The above are only some embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as within the protection scope of the present invention.
Claims (30)
1. A register method of 3D rendering data, characterized by comprising:
performing rigid alignment of source 3D rendering data with target 3D rendering data based on key points, obtaining first transformation data, the first transformation data being the source 3D rendering data after rigid alignment;
performing non-rigid alignment of the first transformation data with the target 3D rendering data based on the key points, obtaining second transformation data;
iteratively computing a transformation matrix from the second transformation data to the target 3D rendering data;
completing 3D registration of the source 3D rendering data to the target 3D rendering data according to the transformation matrix.
2. The method as claimed in claim 1, characterized in that performing non-rigid alignment of the first transformation data with the target 3D rendering data comprises:
aligning the first transformation data with the target 3D rendering data based on a first constraint condition;
wherein the first constraint condition includes: a key point constraint and a smoothness constraint.
3. The method as claimed in claim 2, characterized in that the first constraint condition includes:
the sum of the product of the function of the smoothness constraint and a preset weight matrix, and the function of the key point constraint, being minimal;
wherein the preset weight matrix is a diagonal matrix composed of the confidence levels of the key points of the source 3D rendering data.
4. The method as claimed in claim 3, characterized in that the method for computing the confidence levels of the key points of the source 3D rendering data includes:
computing the feature distance between corresponding key points of the source 3D rendering data and the target 3D rendering data;
acting on this feature distance with a preset monotonically decreasing function, obtaining the confidence levels of the key points of the source 3D rendering data.
5. The method as claimed in claim 1, characterized in that iteratively computing the transformation matrix from the second transformation data to the target 3D rendering data comprises:
in each iteration, computing, based on a second constraint condition, a transformation matrix from the transformation data of the last iteration to the target 3D rendering data;
when each iteration is completed, judging, according to the transformation matrices obtained in the current and the last iteration, whether a convergence condition is satisfied;
if so, taking the transformation matrix computed in the current iteration as the final solved transformation matrix; if not, continuing with the next iteration;
wherein the second constraint condition includes: a key point constraint, a smoothness constraint, and a distance constraint between the transformation data of the last iteration and the corresponding points of the target 3D rendering data;
wherein, in the first iteration, the second transformation data serves as the transformation data of the last iteration.
6. The method as claimed in claim 5, characterized in that the second constraint condition includes:
the sum of the product of the function of the smoothness constraint and a preset smoothness constraint weight, the distance constraint function, and the product of the function of the key point constraint and a preset key point constraint weight, being minimal.
7. The method as claimed in claim 5, characterized in that the second constraint condition further includes:
under the condition of a key-point-neighborhood inequality constraint, the sum of the product of the function of the smoothness constraint and the preset smoothness constraint weight, the distance constraint function, and the product of the function of the key point constraint and the preset key point constraint weight, being minimal.
8. The method as claimed in claim 7, characterized in that the key-point-neighborhood inequality constraint includes:
the farther a point in the neighborhood of a key point is from the key point, the looser the constraint.
9. The method as claimed in any one of claims 1-8, characterized in that, after iteratively computing the transformation matrix from the second transformation data to the target 3D rendering data, the method further includes:
smoothing the iteratively computed transformation matrix.
10. The method as claimed in claim 9, characterized in that smoothing the iteratively computed transformation matrix comprises:
smoothing the iteratively computed transformation matrix based on a third constraint condition;
wherein the third constraint condition includes: a smoothness constraint and a distance constraint between third transformation data and the corresponding points of the target 3D rendering data;
wherein the third transformation data is the source 3D rendering data transformed by the iteratively computed transformation matrix.
11. The method as claimed in any one of claims 1-8, characterized in that, before performing the rigid alignment, the method further includes:
performing position optimization on the key points marked in the target 3D rendering data.
12. The method as claimed in claim 11, characterized in that performing position optimization on the key points marked in the target 3D rendering data comprises:
performing feature extraction on the key points of the source 3D rendering data and the target 3D rendering data;
performing position optimization on the key points marked in the target 3D rendering data according to the extracted features;
wherein the extracted features include at least one of the following: color features and geometric features.
13. The method as claimed in claim 12, characterized in that performing position optimization on the key points marked in the target 3D rendering data comprises:
determining, in the neighborhood of a key point of the target 3D rendering data, the point with the smallest feature distance to the corresponding key point of the source 3D rendering data;
taking the position of the determined point as the position of the key point of the target data after optimization.
14. The method as claimed in claim 5, characterized in that, before computing, based on the second constraint condition, the transformation matrix from the transformation data of the last iteration to the target 3D rendering data, the method further includes:
screening the corresponding points between the transformation data of the last iteration and the target 3D rendering data.
15. The method as claimed in claim 14, characterized in that screening the corresponding points between the transformation data of the last iteration and the target 3D rendering data comprises:
screening the corresponding points between the transformation data of the last iteration and the target 3D rendering data according to at least one of the following screening principles:
rejecting corresponding points whose distance to the point is greater than a set distance value;
rejecting corresponding points whose normal direction forms an angle greater than a set angle value with that of the point;
rejecting corresponding points that are boundary points in the target 3D rendering data.
16. The method as claimed in claim 15, characterized in that, before screening the corresponding points between the transformation data of the last iteration and the target 3D rendering data, the method further includes:
estimating the corresponding points between the transformation data of the last iteration and the target 3D rendering data according to at least one of the following methods:
nearest point estimation, normal shooting, back projection.
17. a kind of register device of 3D rendering data characterized by comprising
Rigid alignment unit is obtained for source 3D rendering data to be based on key point and target 3D rendering data progress rigid alignment
To the first transformation data, the first transformation data are the source 3D rendering data after rigid alignment;
Non-rigid alignment unit is non-rigid right for carrying out the first transformation data based on key point and target 3D rendering data
Together, the second transformation data are obtained;Export iteration notice and the second transformation data;
Transformation matrix computing unit, after receiving the iteration notice, the second transformation data of iterative calculation are schemed to target 3D
As the transformation matrix of data;And 3D rendering data in source are completed according to calculated transformation matrix and are infused to the 3D of target 3D rendering data
Volume.
18. device as claimed in claim 17, which is characterized in that
The non-rigid alignment unit is specifically used for being based on the first constraint condition, and the first of rigid alignment unit output is become
It changes data and target 3D rendering data is aligned;Wherein, the first constraint condition includes: key point constraint and smoothness constraint.
19. device as claimed in claim 18, which is characterized in that the first constraint condition includes:
The product of the function of the smoothness constraint and default weight matrix, it is minimum with the sum of the function of key point constraint;
Wherein, presetting weight matrix is diagonal matrix, is made of the confidence level of the key point of source 3D rendering data.
20. The apparatus of claim 19, further comprising:
a key point confidence computing unit, configured to compute the feature distance between each key point of the source 3D image data and the corresponding key point of the target 3D image data, and to apply a preset monotonically decreasing function to that feature distance to obtain the confidence level of the key point of the source 3D image data.
21. The apparatus of claim 17, wherein the transformation matrix computing unit comprises:
an iteration subunit, configured to perform one iteration upon receiving an iteration notification, namely to compute, based on a second constraint condition, a transformation matrix from the transformation data of the previous iteration to the target 3D image data, and then to send a single-iteration completion notification; wherein the second constraint condition comprises the key point constraint, the smoothness constraint, and a distance constraint between corresponding points of the previous iteration's transformation data and the target 3D image data; and wherein, in the first iteration, the second transformation data serves as the transformation data of the previous iteration; and
a termination judgment subunit, configured to judge, upon receiving the single-iteration completion notification and from the transformation matrices obtained in the current and previous iterations, whether a convergence condition is satisfied; if so, to take the transformation matrix computed in the current iteration as the final solved transformation matrix; if not, to send an iteration notification to the iteration subunit.
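The iterate-then-judge loop of claim 21 can be shown schematically. In the sketch below, a pure translation stands in for the full transformation matrix solved under the second constraint condition, and the damping factor 0.5 and tolerance are assumptions; only the control flow (seed with the second transformation data, solve, compare consecutive solutions, stop on convergence) mirrors the claim:

```python
import numpy as np

def iterate_to_convergence(second_transform, target, tol=1e-6, max_iter=100):
    """Schematic control flow of claim 21: the second transformation data
    seeds the first iteration; each step solves for a transformation
    (here a translation only) and the termination judgment compares
    consecutive solutions against a convergence tolerance."""
    data, prev_T = second_transform.copy(), None
    for _ in range(max_iter):
        # One iteration: a damped step moving the data toward the target.
        T = 0.5 * (target.mean(axis=0) - data.mean(axis=0))
        data = data + T
        # Termination judgment: convergence of consecutive transformations.
        if prev_T is not None and np.linalg.norm(T - prev_T) < tol:
            return T, data
        prev_T = T
    return prev_T, data

src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tgt = src + np.array([3.0, -1.0, 2.0])
T_final, registered = iterate_to_convergence(src, tgt)
```

Because each step halves the remaining offset, consecutive solutions shrink geometrically and the convergence test fires after a few dozen iterations.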
22. The apparatus of claim 21, wherein the second constraint condition comprises:
minimizing the sum of the product of the smoothness constraint function and a preset smoothness constraint weight, the distance constraint function, and the product of the key point constraint function and a preset key point constraint weight.
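Claim 22 fixes only the structure of the energy, not the form of each term. Below is one concrete quadratic reading, with simple sum-of-squares terms and a displacement-difference smoothness term as assumptions; any other convex choices would satisfy the claim equally:

```python
import numpy as np

def second_constraint_energy(X, X0, target_corr, kp_idx, tgt_kp, edges,
                             w_smooth=1.0, w_kp=1.0):
    """One concrete reading of claim 22's energy: smoothness term times its
    preset weight, plus the corresponding-point distance term, plus the key
    point term times its preset weight; registration seeks the minimum."""
    disp = X - X0                                              # per-point displacements
    e_smooth = sum(((disp[i] - disp[j]) ** 2).sum() for i, j in edges)
    e_dist = ((X - target_corr) ** 2).sum()                    # corresponding-point distances
    e_kp = ((X[kp_idx] - tgt_kp) ** 2).sum()                   # key point constraint
    return w_smooth * e_smooth + e_dist + w_kp * e_kp

X0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
X = X0 + np.array([0.5, 0.5, 0.0])   # a perfectly smooth (rigid) displacement
edges = [(0, 1), (1, 2)]
E = second_constraint_energy(X, X0, target_corr=X, kp_idx=[0], tgt_kp=X[[0]], edges=edges)
```

With a uniform displacement, matching correspondences, and a satisfied key point, every term vanishes, which is the configuration the iterative solver of claim 21 drives toward.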
23. The apparatus of claim 21, wherein the second constraint condition further comprises: subject to a key point neighborhood inequality constraint, minimizing the sum of the product of the smoothness constraint function and the preset smoothness constraint weight, the distance constraint function, and the product of the key point constraint function and the preset key point constraint weight.
24. The apparatus of claim 23, wherein the key point neighborhood inequality constraint is specifically that the farther a point in a key point's neighborhood lies from that key point, the weaker the constraint imposed on it.
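Claim 24 only requires the constraint weight to decay with distance from the key point. A minimal sketch, assuming a linear falloff over a normalized radius (the patent does not fix the falloff shape or the neighborhood radius):

```python
import numpy as np

def neighborhood_weights(points, keypoint, radius,
                         falloff=lambda r: np.maximum(0.0, 1.0 - r)):
    """Claim 24, paraphrased: within a key point's neighborhood the
    constraint weight decays with distance to the key point. The linear
    falloff here is just one possible monotone choice."""
    r = np.linalg.norm(points - keypoint, axis=1) / radius   # normalized distance
    return falloff(r)

kp = np.zeros(3)
pts = np.array([[0.0, 0.0, 0.0],
                [0.5, 0.0, 0.0],
                [0.9, 0.0, 0.0],
                [2.0, 0.0, 0.0]])
w = neighborhood_weights(pts, kp, radius=1.0)
```

Points at the key point itself get full weight, points at the edge of the neighborhood almost none, and points outside it none at all, so the key point's influence fades smoothly instead of ending at a hard boundary.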
25. The apparatus of any one of claims 17-24, further comprising:
a smoothing unit, configured to smooth the transformation matrix iteratively computed by the transformation matrix computing unit.
26. The apparatus of claim 25, wherein
the smoothing unit is specifically configured to smooth, based on a third constraint condition, the transformation matrix iteratively computed by the transformation matrix computing unit;
wherein the third constraint condition comprises: a smoothness constraint, and a distance constraint between corresponding points of third transformation data and the target 3D image data;
wherein the third transformation data is the source 3D image data transformed by the iteratively computed transformation matrix.
27. The apparatus of any one of claims 17-24, further comprising:
a key point position optimization unit, configured to optimize the positions of the key points marked in the target 3D image data.
28. The apparatus of claim 27, wherein the key point position optimization unit specifically comprises:
a feature extraction subunit, configured to extract features from the key points of the source 3D image data and the target 3D image data, the extracted features comprising at least one of color features and geometric features; and
a position optimization subunit, configured to optimize the positions of the key points marked in the target 3D image data according to the features extracted by the feature extraction subunit.
29. The apparatus of claim 28, wherein
the position optimization subunit is specifically configured to determine, within the neighborhood of a key point of the target 3D image data, the point whose feature distance to the corresponding key point of the source 3D image data is smallest, and to take the position of the determined point as the position of that key point of the optimized target data.
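The position optimization of claim 29 reduces to a nearest-neighbor search in feature space restricted to a spatial neighborhood. In the sketch below, the one-dimensional feature (standing in for a color or geometric descriptor) and the index-list neighborhood are illustrative assumptions:

```python
import numpy as np

def optimize_keypoint(src_feat, tgt_points, tgt_feats, neighborhood):
    """Claim 29 sketch: within the neighborhood of the target key point,
    pick the point whose feature vector is nearest to the source key
    point's feature, and adopt its position."""
    cand = np.asarray(neighborhood)                    # indices inside the neighborhood
    d = np.linalg.norm(tgt_feats[cand] - src_feat, axis=1)
    best = cand[d.argmin()]
    return tgt_points[best]

tgt_points = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
tgt_feats  = np.array([[0.0], [0.3], [0.9], [0.1]])
src_feat   = np.array([1.0])    # e.g. a color feature of the source key point
new_pos = optimize_keypoint(src_feat, tgt_points, tgt_feats, neighborhood=[1, 2, 3])
```

Restricting the search to the neighborhood keeps a manually marked key point from jumping to a far-away but coincidentally similar point.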
30. An electronic device, comprising: a processor and a memory,
the memory having a computer program stored therein,
the computer program being provided to the processor and executed by the processor to implement the method of any one of claims 1-16.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510019073.XA CN105844582B (en) | 2015-01-15 | 2015-01-15 | The register method and device of 3D rendering data |
KR1020150178967A KR102220099B1 (en) | 2015-01-15 | 2015-12-15 | Registration method and apparatus for 3D visual data |
US14/994,688 US10360469B2 (en) | 2015-01-15 | 2016-01-13 | Registration method and apparatus for 3D image data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510019073.XA CN105844582B (en) | 2015-01-15 | 2015-01-15 | The register method and device of 3D rendering data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105844582A CN105844582A (en) | 2016-08-10 |
CN105844582B true CN105844582B (en) | 2019-08-20 |
Family
ID=56580013
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510019073.XA Active CN105844582B (en) | 2015-01-15 | 2015-01-15 | The register method and device of 3D rendering data |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102220099B1 (en) |
CN (1) | CN105844582B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102633159B1 (en) * | 2016-12-13 | 2024-02-05 | 한국전자통신연구원 | Apparatus and method for restoring 3d-model using the image-processing |
US10424045B2 (en) * | 2017-06-21 | 2019-09-24 | International Business Machines Corporation | Machine learning model for automatic image registration quality assessment and correction |
KR102195168B1 (en) * | 2017-11-21 | 2020-12-24 | 한국전자통신연구원 | 3d reconstruction terrain matching method of and apparatus thereof |
CN108073914B (en) * | 2018-01-10 | 2022-02-18 | 成都品果科技有限公司 | Animal face key point marking method |
CN108416846A (en) * | 2018-03-16 | 2018-08-17 | 北京邮电大学 | It is a kind of without the three-dimensional registration algorithm of mark |
KR102276369B1 (en) * | 2019-12-27 | 2021-07-12 | 중앙대학교 산학협력단 | 3D Point Cloud Reliability Determining System and Method |
KR102334485B1 (en) * | 2020-08-20 | 2021-12-06 | 이마고웍스 주식회사 | Automated method for aligning 3d dental data and computer readable medium having program for performing the method |
KR102438093B1 (en) * | 2020-08-01 | 2022-08-30 | 센스타임 인터내셔널 피티이. 리미티드. | Method and apparatus for associating objects, systems, electronic devices, storage media and computer programs |
KR102455546B1 (en) * | 2020-09-25 | 2022-10-18 | 인천대학교 산학협력단 | Iterative Closest Point Algorithms for Reconstructing of Colored Point Cloud |
KR20220112072A (en) | 2021-02-03 | 2022-08-10 | 한국전자통신연구원 | Apparatus and Method for Searching Global Minimum of Point Cloud Registration Error |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101082988A (en) * | 2007-06-19 | 2007-12-05 | 北京航空航天大学 | Automatic deepness image registration method |
CN104021547A (en) * | 2014-05-17 | 2014-09-03 | 清华大学深圳研究生院 | Three dimensional matching method for lung CT |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011198330A (en) * | 2010-03-24 | 2011-10-06 | National Institute Of Advanced Industrial Science & Technology | Method and program for collation in three-dimensional registration |
CN102831382A (en) * | 2011-06-15 | 2012-12-19 | 北京三星通信技术研究有限公司 | Face tracking apparatus and method |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101082988A (en) * | 2007-06-19 | 2007-12-05 | 北京航空航天大学 | Automatic deepness image registration method |
CN104021547A (en) * | 2014-05-17 | 2014-09-03 | 清华大学深圳研究生院 | Three dimensional matching method for lung CT |
Non-Patent Citations (2)
Title |
---|
Optimal Step Nonrigid ICP Algorithms for Surface Registration; Brian Amberg et al.; IEEE Conference on Computer Vision and Pattern Recognition; Jun. 2007; pp. 1-9 |
3D face data registration algorithm based on non-rigid ICP; Lin Yuan; Journal of Tsinghua University (Science and Technology); Dec. 2014; Vol. 54, No. 3; pp. 334-340 |
Also Published As
Publication number | Publication date |
---|---|
CN105844582A (en) | 2016-08-10 |
KR102220099B1 (en) | 2021-02-26 |
KR20160088226A (en) | 2016-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105844582B (en) | The register method and device of 3D rendering data | |
CN103345736B (en) | A kind of virtual viewpoint rendering method | |
CN105531998B (en) | For object detection and the method for segmentation, device and computer program product | |
US8290248B2 (en) | Determining disparity search range in stereo videos | |
CN108764048A (en) | Face critical point detection method and device | |
CN113823001A (en) | Method, device, equipment and medium for generating house type graph | |
CN105069804B (en) | Threedimensional model scan rebuilding method based on smart mobile phone | |
US20100208994A1 (en) | Filling holes in depth maps | |
CN104157010A (en) | 3D human face reconstruction method and device | |
CN109525847B (en) | Just noticeable distortion model threshold calculation method | |
CN102663747B (en) | Stereo image objectivity quality evaluation method based on visual perception | |
WO2023138053A1 (en) | Track fusion method and apparatus for unmanned surface vehicle | |
CN104850847B (en) | Image optimization system and method with automatic thin face function | |
CN110211169B (en) | Reconstruction method of narrow baseline parallax based on multi-scale super-pixel and phase correlation | |
Shen et al. | A hierarchical horizon detection algorithm | |
CN106373128B (en) | Method and system for accurately positioning lips | |
WO2007108412A1 (en) | Three-dimensional data processing system | |
CN108257098A (en) | Video denoising method based on maximum posteriori decoding and three-dimensional bits matched filtering | |
US20070086659A1 (en) | Method for groupwise point set matching | |
CN112991358A (en) | Method for generating style image, method, device, equipment and medium for training model | |
US20170083787A1 (en) | Fast Cost Aggregation for Dense Stereo Matching | |
CN108629809B (en) | Accurate and efficient stereo matching method | |
KR101869605B1 (en) | Three-Dimensional Space Modeling and Data Lightening Method using the Plane Information | |
CN114240954A (en) | Network model training method and device and image segmentation method and device | |
CN109492787A (en) | Appointment business handles method, apparatus, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |