CN101388115B - Depth image auto-registration method combined with texture information - Google Patents

Depth image auto-registration method combined with texture information

Info

Publication number
CN101388115B
CN101388115B CN200810224183XA
Authority
CN
China
Prior art keywords
image
depth image
pixel
registration
subsequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200810224183XA
Other languages
Chinese (zh)
Other versions
CN101388115A (en)
Inventor
齐越
赵沁平
杨棽
沈旭昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN200810224183XA priority Critical patent/CN101388115B/en
Publication of CN101388115A publication Critical patent/CN101388115A/en
Application granted granted Critical
Publication of CN101388115B publication Critical patent/CN101388115B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A depth image automatic registration method combining texture information is mainly used to reconstruct three-dimensional models of various real objects. It comprises the following steps: first, extract a texture image from the scan data or generate one from the depth image; second, extract interest pixels in the texture image based on SIFT features and find a candidate set of matched interest pixels by the cross-check method; third, find the correctly matched pixel pairs in the candidate set under geometric-information constraints; fourth, find the matched vertex pairs in three-dimensional space corresponding to the matched pixel pairs and compute the rigid-body transformation matrix between the two depth images; fifth, optimize the result using an improved ICP algorithm; sixth, based on pairwise registration of two depth images, divide the input sequence of depth images into several strip-shaped subsequences; finally, merge the subsequences using a forward-search strategy and construct a complete three-dimensional model. The invention can quickly register large-scale three-dimensional scan data to generate a three-dimensional model, and has strong noise resistance and good generality.

Description

A depth image auto-registration method combined with texture information
Technical field
The present invention relates to a method for automatically reconstructing three-dimensional object models from 3D scan data, and in particular to a depth image auto-registration method combined with texture information. It belongs to the fields of computer virtual reality and computer graphics, and is mainly used for reconstructing lifelike three-dimensional models of various real objects, in particular for reconstructing three-dimensional models of cultural relics in digital museums.
Background technology
Reconstructing three-dimensional models of real-world objects is being applied more and more widely in virtual simulation, computer animation, reverse engineering, computer-aided design (CAD), digital museums, and other areas. With the continuous development of 3D scanning equipment, reconstructing models of real objects from data collected by 3D scanners has gradually become a popular three-dimensional modeling approach. Mainstream 3D scanners can acquire not only the depth information of surface vertices but also their color information; that is, they can simultaneously capture a depth image (range image) and a texture image of equal resolution. A depth image is also referred to as point cloud data (PCD). Because the geometry of real objects is often complex and the viewpoint of a 3D scanner is limited, the object must be scanned from several different viewpoints in order to obtain the complete geometric information of its surface, and the data collected from each scan must then be stitched together to recover a complete three-dimensional model. This process is the registration of scan data.
At present, scan-data registration can be divided into two processes: pair-wise registration of two views and multi-view registration. For two depth images with an overlapping region, the purpose of pair-wise registration is to find the optimal motion matrix between the two depth images so that one is stitched onto the other with minimum error. The registration of two depth images is generally divided into two steps: coarse registration and fine (accurate) registration. First, in the coarse registration stage, an estimate of the rigid-body transformation matrix between the two depth images is computed; then, in the fine registration stage, the result is optimized by minimizing a predefined error function. Similarly, the purpose of multi-view registration is to stitch together a given set of depth images, which mainly involves two aspects: finding the adjacency relations between the depth images, i.e., which depth images overlap, and minimizing the global error.
Two kinds of coarse registration methods are commonly used. The first class defines a local feature descriptor, computes the feature values of all points, finds three or more pairs of corresponding feature points between the two depth images by comparing and matching these feature values, and finally computes the rigid-body transformation matrix between the two data sets using the method of Horn (Horn, B. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America 4 (1987), 629-642.). Common local feature descriptors include: spin images (Huber, D.F. Automatic three-dimensional modeling from reality. PhD thesis, Carnegie Mellon University, 2002. Chair-Martial Hebert.), descriptors based on surface derivative features (Gal, R., Cohen-Or, D. Salient geometric features for partial shape matching and similarity. ACM Trans. Graph. 2006, 25, 130-150.), descriptors based on integral volumes (Gelfand, N., Mitra, N.J., Guibas, L.J., and Pottmann, H. Robust global registration. In Symposium on Geometry Processing (2005), pp. 197-206.), and descriptors based on scale space (Li, X., and Guskov, I. Multi-scale features for approximate alignment of point-based surfaces. In SGP '05: Proceedings of the third Eurographics symposium on Geometry processing (Aire-la-Ville, Switzerland, 2005), Eurographics Association, p. 217.). The second class of coarse registration methods is based on randomized algorithms such as RANdom SAmple Consensus (RANSAC). For example, the method proposed by Chen (Chen, C.-S., Hung, Y.-P., and Cheng, J.-B. Ransac-based darces: A new approach to fast automatic registration of partially overlapping range images. IEEE Trans. Pattern Anal. Mach. Intell. 21, 11 (1999), 1229-1234.) improves the RANSAC algorithm by preselecting reference points on the two depth images and introducing spatial constraints, finding a balance between registration speed and robustness of the result.
Although existing local geometric feature representations are invariant under rigid transformations and remain robust under a certain amount of noise, it is difficult to obtain good results for the registration of large-scale data using local geometric features alone, because of the inherent trade-off between registration speed and stability of low-dimensional versus high-dimensional features, and because depth images contain noise and a large number of geometric defects. In addition, for any local feature descriptor, the feature value of each vertex is computed from the vertices in its neighborhood; therefore, different vertices on the object surface whose neighborhoods have similar geometric features will obtain close feature values, which makes finding matching feature points between two depth images a difficult task. Adding further constraints for pruning, or aggregating the features, are both effective ways to address this problem, but they greatly increase the time complexity of the feature-point matching algorithm.
Considering that research on two-dimensional image registration has achieved good results, researchers have begun to combine texture images to solve the registration of two depth images. Roth (Roth, G. Registering two overlapping range images. In: 3-D Digital Imaging and Modeling, Proceedings. Second International Conference on, 1999, 191-200.) searches for correct correspondences among triangles formed by matched interest vertices, but the efficiency and accuracy of this method are not high. Bendels (Bendels, G.H., Degener, P., Wahl, R., Kortgen, M., Klein, R. Image-based registration of 3d-range data using feature surface elements. In VAST, 2004, 115-124.) first finds the vertices corresponding to the interest pixels, then extracts features from these vertices and their neighborhoods, and uses geometric-information constraints to find the correct matching relations among them before completing the registration. Bendels' method obtains good registration results when the scan data are of good quality and the viewing angle between data sets changes little. Seo (Seo, J.K., Sharp, G.C., Lee, S.W. Range data registration using photometric features. In: Computer Vision and Pattern Recognition, CVPR 2005, 1140-1145.) builds on Bendels' method and uses photometric correction techniques to handle the three-dimensional rotation and projective distortion between texture images, but it can only correct textures in planar or nearly planar regions, so it is difficult to apply to objects with complex surface geometry. Liu Xiaoli et al. (Liu Xiaoli, Peng Xiang, Li Ameng, Gao Pengdong. Depth image matching combined with texture information. Journal of Computer-Aided Design & Computer Graphics, 2007, 19 (3), 340-345.) use the Sobel operator to extract feature pixels on the texture image, find corresponding feature pixels through normalized correlation coefficients, and finally verify the validity of these feature-pixel pairs using the Hausdorff distance. This method relies solely on texture-image information to find matched pixels, requires human interaction to select the overlapping region, and is only applicable to registering texture images with simple, small changes in shooting angle.
For the fine registration of two depth images, researchers have proposed many accurate registration methods, among which the milestone is the Iterative Closest Point (ICP) algorithm (Besl, P.J., and McKay, N.D. A method for registration of 3-d shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14, 2 (1992), 239-256; and Chen, Y., and Medioni, G. Object modelling by registration of multiple range images. Image Vision Comput. 10, 3 (1992), 145-155.). The ICP algorithm takes the initial positions of the two depth images as the initial state and then, through an iterative process, continually reduces the sum of the distances between all corresponding points of the two depth images until a certain threshold is reached, thereby computing the motion matrix between the two depth images. The ICP algorithm and its variants generally need a good initial state, so a coarse registration process is required to compute an estimate of the rigid motion between the two depth images as the initial state for accurate registration.
When registering multiple depth images, a graph G = (V, E) representing the topological relations between the depth images of the whole model, called the model graph, is usually built. Each vertex v_i of the graph represents a depth image, and each edge e_{i,j} indicates that the two depth images represented by vertices v_i and v_j have an overlapping region; the motion matrix between them is also assigned to this edge. When constructing the edge set of graph G, in the worst case any two depth images must be checked for overlap, so the time complexity of the algorithm is O(n²). Pingi (Pingi, P., Fasano, A., Cignoni, P., Montani, C., Scopigno, R. Exploiting the scanning sequence for automatic registration of large sets of range maps. Computer Graphics Forum 2005, 24 (3), 517-526.) simplifies this exhaustive algorithm by predefining a scanning strategy. A spanning tree of the model graph constitutes the minimal set of elements needed to build the complete model from these depth images, so once G has been constructed, the complete model can be rebuilt by searching a spanning tree of G.
However, the strip-oriented scanning and registration strategy proposed by Pingi is too idealized; in practice it is difficult to guarantee that any two adjacent data sets in the depth image sequence have an overlapping region.
Summary of the invention
The technical problem solved by the present invention: overcoming the deficiencies of the prior art, a depth image auto-registration method combined with texture information is provided, which can quickly process large amounts of scan data to generate a three-dimensional model and at the same time has strong noise resistance.
The technical solution of the present invention: a depth image auto-registration method combined with texture information, divided into the registration of two depth images and the registration of multiple depth images, wherein:
The steps for registering two depth images are as follows:
Let P and Q be the scan data to be registered; the texture image and depth image of P are I_p and R_p respectively, and the texture image and depth image of Q are I_q and R_q respectively.
In the first step, extract the texture images I_p and I_q from P and Q, or generate the texture images I_p and I_q from the depth images R_p and R_q.
In the second step, extract interest pixels in the texture images I_p and I_q based on SIFT features, and find the candidate set C of matched pixel pairs among them by the cross-check method.
In the third step, use an adaptive RANSAC algorithm to find the correctly matched pixel pairs in the candidate set C according to geometric-information constraints.
In the fourth step, find the matched vertex pairs in three-dimensional space corresponding to the matched pixel pairs, and compute the rigid-body transformation matrix between the two depth images from these vertex correspondences.
In the fifth step, use an improved ICP algorithm to optimize the result of the fourth step and complete the accurate registration of the two depth images.
The steps for registering multiple scan data sets are as follows:
Let the model graph representing the topological relations between the depth images be G = (V, E); each vertex v_i ∈ V of G represents a depth image, each edge e_{i,j} of G indicates that the two depth images represented by vertices v_i and v_j have an overlapping region, and the weight of this edge is the rigid-body transformation matrix from v_i to v_j; S = (s_1, s_2, ..., s_n) is the input sequence of n depth images to be registered.
In the sixth step, based on the pairwise registration of two depth images, divide the input sequence S of depth images into k strip-shaped subsequences S = (W_1, W_2, ..., W_k), where each W_i ∈ S, i = 1...k, is a subsequence of several depth images and any two adjacent depth images in W_i have an overlapping region.
In the seventh step, adopt a forward-search strategy to merge the subsequences, and construct the complete three-dimensional model from the merged subsequences.
In the first step, the method of generating the texture images I_p and I_q from the depth images R_p and R_q is:
apply Laplacian smoothing to the depth image to obtain its base model, compute for each vertex in the depth image the distance to its corresponding vertex on the base model, and use each distance value as the brightness value of the pixel corresponding to that vertex in the texture image.
In the second step, the method of extracting interest pixels in the texture images I_p and I_q based on SIFT features and finding the candidate set C of matched pixel pairs among them by the cross-check method is:
(1) Use the SIFT algorithm to find the interest pixel sets in I_p and I_q respectively. For each interest pixel p detected by the SIFT algorithm, denote the SIFT feature vector of p by sift(p), and define the distance d(p, q) = ||sift(p) - sift(q)|| to represent the degree of similarity between two interest pixels p and q: the smaller this distance, the more similar the two pixels; conversely, the larger the distance, the greater their difference.
(2) According to the Euclidean distance between the SIFT feature vectors of two pixels, search for corresponding pixels between the interest pixel sets of I_p and I_q, as follows:
(2.1) Delete interest pixels that have no corresponding vertex in the depth image.
(2.2) For an interest pixel i in the texture image, if the vertex corresponding to i in the depth image lies on the boundary of the depth image mesh, delete i.
(2.3) For each key pixel i_p in I_p, find in I_q the two key pixels i_q^1 and i_q^2 with the smallest distances to i_p; assume d(i_p, i_q^1) ≤ d(i_p, i_q^2). Define the dissimilarity diff(i_q^1, i_q^2) between i_q^1 and i_q^2 in terms of these two distances, and predefine a dissimilarity threshold δ_diff. If diff(i_q^1, i_q^2) > δ_diff, then there is a notable difference between i_q^1 and i_q^2, and i_q^1 can serve as the corresponding pixel of i_p; otherwise i_p has multiple correspondences in I_q and i_p is discarded. When i_q^1 can serve as the corresponding pixel of i_p, the same method is used to find in I_p the corresponding pixel i_p' of i_q^1. If i_p' and i_p are the same pixel, then i_p and i_q^1 are a pair of candidate corresponding pixels between I_p and I_q, and (i_p, i_q^1) is added to the candidate set C; otherwise i_p is still considered to have no corresponding pixel in I_q.
In the third step, the method of using an adaptive RANSAC algorithm to find the correctly matched pixel pairs in the candidate set C according to geometric-information constraints is:
(1) Sort all elements (i_p, i_q) in the candidate set C in ascending order of d(i_p, i_q), and put the result into a queue L.
(2) Check whether the first l elements of L are correct matches, as follows:
(2.1) Randomly pick 3 elements from the first l elements of L as a sample.
(2.2) Compute the rigid-body transformation matrix from the geometric information of the vertices in the depth images corresponding to the 3 matched pixel pairs.
(2.3) According to the geometric information of the corresponding vertices, judge whether each of the remaining l-3 elements is a correct match under this rigid-body transformation. Correctly matched elements are inliers and the rest are outliers; all inliers form the consensus set of this sample, and the upper bound on the number of samples is then updated according to the number of inliers.
(2.4) Repeat steps (2.1)-(2.3) until the number of iterations reaches the sampling upper bound, then find the sample whose consensus set contains the most elements (i.e., the most inliers), and take the consensus set of that sample as the set of correct matches.
In the sixth step, based on the pairwise registration of two depth images, the steps of dividing the input sequence of depth images into several strip-shaped subsequences are:
(1) Add s_1 to the first subsequence W_1.
(2) Register in turn every pair of adjacent depth images s_i, s_{i+1} ∈ S, i = 1...n-1. If s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i; otherwise add s_{i+1} to a new subsequence; and add the relevant edges to the graph G.
In the seventh step, the steps of adopting the forward-search strategy to merge these subsequences and constructing the complete three-dimensional model from the merged subsequences are:
(1) For every depth image in each subsequence W_i ∈ S, i = 2...k, search the subsequences W_j ∈ S, j = i-1...1 in turn for a depth image that overlaps a depth image in W_i. If such a depth image exists, W_i and W_j can be merged, i.e., each element of W_j is inserted before the head element of W_i, and the relevant edges are added to the graph G.
(2) Repeat step (1) until only one subsequence remains, or no two of the remaining subsequences W_i overlap.
(3) Rebuild the whole model according to the graph G.
The beneficial effects of the present invention compared with the prior art are:
(1) The present invention extracts the texture images I_p and I_q from P and Q or generates the texture images I_p and I_q from the depth images R_p and R_q. Compared with traditional methods based on matching local geometric features, when the texture images are known, the method of the invention does not compute the geometric features of each vertex, which greatly improves the speed, and it handles data with geometric noise well.
(2) The texture images I_p and I_q are extracted from P and Q or generated from the depth images R_p and R_q. Compared with existing registration methods based on texture information, when the texture images are unknown, the method of the invention can generate texture images from low-dimensional geometric features of the vertices in three-dimensional space, and is therefore clearly more general.
(3) Compared with traditional methods based on matching local geometric features, the present invention extracts interest pixels in the texture images I_p and I_q based on SIFT features and finds the candidate set C of matched pixel pairs among them by the cross-check method, so that image registration techniques can be used to find correspondences between vertices; this gives a clear speed advantage when processing the scan data of complex models.
(4) Compared with existing registration methods that combine texture images, the method of the invention extracts interest pixels in the texture images I_p and I_q based on SIFT features and finds the candidate set C of matched pixel pairs among them by the cross-check method, i.e., it introduces a cross-check of the interest pixels and selects the correct matched pixels in combination with the depth images, which improves the accuracy of the registration results.
(5) The present invention divides the input sequence S of depth images into several strip-shaped subsequences, merges the subsequences with a forward-search strategy, and constructs the complete three-dimensional model from the merged subsequences. For the input depth images, this divide-and-conquer strategy generates the model graph quickly, avoids the need in traditional methods to register every pair of depth images, and greatly increases the speed of multi-view registration, thereby greatly reducing the time of the whole multi-view registration process.
Description of drawings
Fig. 1 is the flowchart of the pairwise depth image registration process of the present invention;
Fig. 2 shows the automatic registration process of two depth images in an embodiment of the invention, where (a) shows the two depth images to be registered, (b) shows the two texture images extracted from the scan data, with white dots indicating the corresponding pixels between the two images, (c) shows, as black dots, the vertices in three-dimensional space corresponding to the corresponding pixels between the texture images, and (d) shows the final registration result;
Fig. 3 shows the registration process for multiple depth images in an embodiment of the invention: for the 13 input depth images, every pair of adjacent data sets is first registered, so that the whole scanning sequence can be divided into 5 strip-shaped subsequences; these 5 subsequences are then merged to finally generate a complete model.
Figs. 4, 5, 6 and 7 show the registration results of scan data of different models obtained with the method of the invention: Fig. 4 is the registration result of the first Bigfoot model, Fig. 5 is the registration result of the second Bigfoot model, Fig. 6 is the registration result of the Buddha model, and Fig. 7 is the registration result of the terracotta warrior model.
Embodiment
As shown in Fig. 1, the present invention includes the registration of two scan data sets and the registration process for multiple scan data sets, as follows:
1. Extract or generate texture images from the scan data.
Let P and Q be the scan data to be registered; the texture image and depth image of P are I_p and R_p respectively, and the texture image and depth image of Q are I_q and R_q respectively. If P and Q contain texture images, these are extracted directly and converted to grayscale images to serve as I_p and I_q. If P and Q do not contain texture images, or the texture images they contain cannot be used for registration, texture images are first generated from the depth images.
Given a depth image R, let Γ be a geometric feature descriptor and Γ(v) the feature value of Γ at a vertex v of R. The feature values of all vertices can be transformed linearly so that their range is [0, 1], and the texture image I_Γ generated by Γ is then defined by taking the transformed value Γ(v) as the brightness of the pixel corresponding to v. In this way a two-dimensional texture image can be generated from the three-dimensional geometric feature Γ. In theory, any geometric feature descriptor can be used to generate a texture image, but considering the efficiency of the algorithm and the accuracy of the result, Γ should have good discriminative power and be easy to compute.
The basic idea of texture image generation in the present invention is as follows. Let the depth image contained in the scan data be R_0 = {v_i^0 | i = 1...n}, and let R_k = {v_i^k | i = 1...n} be the base model obtained by smoothing R_0 k times, where v_i^k is the point corresponding to v_i^0 after smoothing. Define Γ(v_i^0), i = 1...n, as the distance from v_i^0 to its corresponding point v_i^k on the base model, measured along the normal vector n_i^k at v_i^k. In the specific implementation, the present invention uses mesh-based Laplacian smoothing to obtain the base model. Because the depth image R_0 already contains the two-dimensional structure of the point cloud, meshing R_0 is simple and fast. During smoothing, the topological relations between vertices remain unchanged. Let M_k = (V_k, E) be the triangle mesh corresponding to R_k, where V_k and E denote the sets of vertices and edges respectively, and let ∂M_k be the boundary of M_k. For each vertex v_i^k not on the boundary ∂M_k, one smoothing step sets v_i^{k+1} = (1/d_i) Σ_{j ∈ N(i)} v_j^k, where N(i) = {j | (i, j) ∈ E} and d_i = |N(i)|. In experiments, k = 64 is generally used.
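As a concrete illustration of this step, the following Python sketch (NumPy only) generates a grey texture image from a gridded depth image, i.e. an H×W×3 array of surface points with a validity mask. The function names are illustrative, the smoothing works directly on the image grid rather than on an explicit triangle mesh, and the plain point-to-point distance is used in place of the distance along the base-model normal; it is a sketch under these assumptions, not the patent's exact procedure.

```python
import numpy as np

def laplacian_smooth(points, valid, k=64):
    """Smooth the gridded point cloud k times: each interior vertex is replaced
    by the average of its valid 4-neighbours on the depth-image grid (a simple
    stand-in for mesh-based Laplacian smoothing)."""
    base = points.astype(float)
    for _ in range(k):
        acc = np.zeros_like(base)
        cnt = np.zeros(base.shape[:2])
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb = np.roll(base, (dy, dx), axis=(0, 1))
            nb_valid = np.roll(valid, (dy, dx), axis=(0, 1))
            acc += nb * nb_valid[..., None]
            cnt += nb_valid
        interior = valid & (cnt == 4)
        interior[0, :] = interior[-1, :] = False   # grid border counts as boundary
        interior[:, 0] = interior[:, -1] = False
        base[interior] = acc[interior] / cnt[interior][:, None]
    return base

def depth_to_texture(points, valid, k=64):
    """Generate a grey texture image in [0, 1] from a gridded depth image:
    the brightness of each pixel is the (linearly rescaled) distance between
    its vertex and the corresponding vertex of the smoothed base model."""
    base = laplacian_smooth(points, valid, k)
    dist = np.linalg.norm(points - base, axis=2)
    lo, hi = dist[valid].min(), dist[valid].max()
    tex = (dist - lo) / (hi - lo + 1e-12)          # linear transform to [0, 1]
    tex[~valid] = 0.0
    return tex
```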
2. Find the candidate set of corresponding pixels between the texture images.
(1) Considering that the SIFT feature is not only invariant to image scaling and rotation, but also fairly robust to illumination changes, noise, occlusion and small viewpoint changes, the present invention uses the SIFT algorithm to find the interest pixel sets in I_p and I_q respectively.
(2) Search for corresponding pixels between the interest pixel sets of I_p and I_q. For each interest pixel p detected by the SIFT algorithm, denote the SIFT feature vector of p by sift(p), and define the distance d(p, q) = ||sift(p) - sift(q)|| to represent the degree of similarity between two interest pixels p and q: the smaller this distance, the more similar the two pixels; conversely, the larger the distance, the greater their difference. Because there may be viewpoint changes between the depth images and the texture images themselves may be complex, simply using the Euclidean distance between SIFT feature vectors as the criterion for finding corresponding pixels often introduces a large number of wrong matches; at the same time, the SIFT algorithm finds too many interest pixels. The present invention therefore uses the following steps to find the candidate set C of corresponding pixels between the two texture images:
(2.1) To ensure that the pixels in the result set all have corresponding vertices in the depth image, delete the interest pixels that have no corresponding vertex in the depth image.
(2.2) For an interest pixel p in the texture image, if the vertex corresponding to p in the depth image lies on the boundary of the depth image mesh, delete p.
(2.3) Find the candidate set of corresponding pixels between I_p and I_q by the cross-check method. For each key pixel i_p in I_p, find in I_q the two key pixels i_q^1 and i_q^2 with the smallest distances to i_p; without loss of generality, assume d(i_p, i_q^1) ≤ d(i_p, i_q^2). Define the dissimilarity diff(i_q^1, i_q^2) between i_q^1 and i_q^2 in terms of these two distances, and predefine a dissimilarity threshold δ_diff. If diff(i_q^1, i_q^2) > δ_diff, then i_q^1 and i_q^2 are considered to be notably different and i_q^1 can serve as the corresponding pixel of i_p; otherwise i_p is considered to have multiple correspondences in I_q and is discarded. When i_q^1 can serve as the corresponding pixel of i_p, the same method is used in the opposite direction to find in I_p the corresponding pixel i_p' of i_q^1. If i_p' and i_p are the same pixel, then i_p and i_q^1 are considered to be a pair of candidate corresponding pixels between I_p and I_q, and (i_p, i_q^1) is added to the candidate set C; otherwise i_p is still considered to have no corresponding pixel in I_q. In experiments, δ_diff = 0.2 is generally used.
3. Select the set of correctly matched pixels according to the depth images.
The candidate set C of corresponding pixels found with the SIFT feature vectors alone still contains wrong matches, so another method is needed to find the correctly matched pixel pairs in C. When the three-dimensional information of the vertices corresponding to the pixels is known, the correctness of a match can be judged entirely with three-dimensional constraints. The present invention uses an adaptive RANSAC algorithm combined with the three-dimensional information of the vertices to find the correctly matched pixel pairs in the candidate set C, as follows:
(1) Sort all elements (i_p, i_q) in the candidate set C in ascending order of d(i_p, i_q), and put the result into a queue L.
(2) Because recovering a rigid-body transformation matrix requires at least three pairs of non-collinear matched vertices in three-dimensional space, the present invention only checks whether the first l elements of L are correct matches; l = 50 is generally used in experiments.
(2.1) Randomly pick 3 elements from the first l elements of L as a sample.
(2.2) Compute the rigid-body transformation matrix from the geometric information of the vertices in the depth images corresponding to these 3 matched pixel pairs.
(2.3) According to the geometric information of the corresponding vertices, judge whether each of the remaining l-3 elements is a correct match under this rigid-body transformation. Correctly matched elements are inliers and the rest are outliers; all inliers form the consensus set of this sample, and the upper bound on the number of samples is then updated according to the number of inliers.
(2.4) Repeat the above three steps until the number of iterations reaches the sampling upper bound, and finally take the consensus set of the sample with the most inliers as the set of correct matches.
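The adaptive sampling can be sketched in Python as follows; `estimate_rigid` is assumed to compute a rotation R and translation t from vertex pairs (a quaternion-based version is sketched under step 4 below), and the inlier threshold `tau` and the standard RANSAC iteration-bound update are illustrative choices not specified in the patent.

```python
import numpy as np

def adaptive_ransac(pairs_3d, l=50, tau=2.0, p_success=0.99, max_iters=1000):
    """pairs_3d: candidate vertex pairs (v_p, v_q), sorted by ascending SIFT
    distance. Returns the indices of the largest consensus set among the
    first l candidates under a rigid transform fitted to random 3-samples."""
    pts = pairs_3d[:l]
    n = len(pts)
    best_inliers, iters = [], 0
    rng = np.random.default_rng(0)
    while iters < max_iters:
        sample = rng.choice(n, size=3, replace=False)
        R, t = estimate_rigid([pts[i] for i in sample])       # 3-point rigid fit
        inliers = [i for i, (vp, vq) in enumerate(pts)
                   if np.linalg.norm(R @ vp + t - vq) < tau]   # geometric check
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
            w = len(inliers) / n                               # inlier ratio
            # adaptively shrink the sampling upper bound, as in standard RANSAC
            max_iters = min(max_iters,
                            int(np.log(1 - p_success) /
                                np.log(1 - w ** 3 + 1e-12)) + 1)
        iters += 1
    return best_inliers
```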
4. Find the vertices in the depth images corresponding to the correctly matched pixel pairs, and use a quaternion-based method to compute an estimate of the rigid-body transformation matrix between the two depth images.
There exist mappings F_p: R_p → I_p and F_q: R_q → I_q between the depth images and the texture images. Because steps 2 and 3 of the algorithm guarantee that every pixel in the result set C_r has a corresponding vertex in the depth image, i.e., for every pixel pair (i_p, i_q) ∈ C_r there exist vertices v_p ∈ R_p and v_q ∈ R_q such that F_p(v_p) = i_p and F_q(v_q) = i_q, once C_r has been found, the set of corresponding vertices between R_p and R_q can be obtained through F_p and F_q, and the rigid-body transformation matrix is then solved with Horn's method.
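For completeness, a small NumPy sketch of Horn's closed-form absolute-orientation solution using unit quaternions is given below; it is a generic implementation of the cited Horn (1987) method, not code from the patent, and can serve as the `estimate_rigid` helper assumed in the RANSAC sketch above.

```python
import numpy as np

def estimate_rigid(pairs):
    """Horn's closed-form absolute orientation: given >= 3 vertex pairs
    (v_p, v_q), return rotation R and translation t with v_q ≈ R @ v_p + t."""
    P = np.array([vp for vp, _ in pairs], dtype=float)
    Q = np.array([vq for _, vq in pairs], dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - cp, Q - cq
    S = X.T @ Y                                   # 3x3 cross-covariance matrix
    # 4x4 symmetric matrix whose top eigenvector is the optimal unit quaternion
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],        S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2],  S[0,1]+S[1,0],        S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],         S[1,1]-S[0,0]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],         S[1,2]+S[2,1],        S[2,2]-S[0,0]-S[1,1]],
    ])
    _, V = np.linalg.eigh(N)
    q0, q1, q2, q3 = V[:, -1]                     # eigenvector of largest eigenvalue
    R = np.array([
        [1-2*(q2*q2+q3*q3), 2*(q1*q2-q0*q3),   2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),   1-2*(q1*q1+q3*q3), 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),   2*(q2*q3+q0*q1),   1-2*(q1*q1+q2*q2)],
    ])
    t = cq - R @ cp
    return R, t
```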
5. Fine registration.
Once the estimate of the rigid-body transformation matrix between the two depth images has been computed, the ICP algorithm or one of its variants can be used, with this result as the initial state, to complete the iterative optimization. In the fine registration process, the present invention improves the classical ICP algorithm in two ways: first, vertices located on or near the boundary of a depth image are not considered during the iterations, which yields better accuracy; second, the distance threshold is adjusted dynamically according to the distances of the corresponding points at each iteration, so that in each iteration only the corresponding point pairs within the threshold are used to compute the new transformation matrix and to minimize the sum of the distances of the corresponding points.
During fine registration, if the sum of the distances of the corresponding points does not converge, or converges to a value exceeding a predefined threshold, the two depth images are considered impossible to register, i.e., either the coarse registration result is incorrect or the two depth images have no overlapping region. In the registration of multiple depth images, this criterion is used to decide whether two depth images overlap.
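The improved ICP can be sketched as follows, assuming SciPy's cKDTree for closest-point search and reusing the `estimate_rigid` helper; the particular update rule for the dynamic distance threshold (a multiple of the median residual) is an assumption, since the patent does not state the exact rule.

```python
import numpy as np
from scipy.spatial import cKDTree

def improved_icp(src, dst, dst_boundary, R0, t0, iters=50, tau_factor=3.0):
    """Refine an initial rigid estimate (R0, t0) that maps src onto dst.
    dst_boundary marks target vertices on or near the depth-image boundary;
    they are excluded from the correspondences, as in the improved ICP.
    (Source boundary vertices could be filtered out the same way beforehand.)"""
    dst_kept = dst[~dst_boundary]
    tree = cKDTree(dst_kept)
    R, t = R0, t0
    for _ in range(iters):
        moved = src @ R.T + t
        dist, idx = tree.query(moved)
        tau = tau_factor * np.median(dist)          # dynamic distance threshold
        ok = dist < tau
        if ok.sum() < 3:
            break
        pairs = list(zip(moved[ok], dst_kept[idx[ok]]))
        dR, dt = estimate_rigid(pairs)              # incremental transform
        R, t = dR @ R, dR @ t + dt                  # compose with current estimate
    return R, t
```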
The steps for registering multiple scan data sets are as follows.
When registering n depth images, let the model graph representing the topological relations between the depth images be G = (V, E), where each vertex v_i ∈ V of G represents a depth image, |V| = n, and each edge e_{i,j} of G indicates that the two depth images represented by vertices v_i and v_j have an overlapping region, the weight of this edge being the rigid-body transformation matrix from v_i to v_j. Clearly, if e_{i,j} exists then e_{j,i} also exists, and its weight is the inverse of the weight of e_{i,j}. A spanning tree of the graph G constitutes the minimal set of elements needed to build the complete model from these depth images, so once G has been constructed, the complete model can be rebuilt by searching a spanning tree of G.
6. Based on the pairwise registration of two depth images, divide the input sequence of depth images into several strip-shaped subsequences.
Let S = (s_1, s_2, ..., s_n) be the input sequence of n depth images to be registered. When registering these data, the present invention likewise needs to build the model graph G = (V, E). One could register every pair of depth images in S to judge whether they overlap, compute the corresponding rigid-body transformation matrix whenever they do, and thereby construct all edges of G, but for large-scale data this is a very time-consuming task. Under the scanning strategy of Pingi, s_i always overlaps both s_{i-1} and s_{i+1}, so every pair of adjacent depth images s_i, s_{i+1} ∈ S (i = 1...n-1) can simply be registered in turn to construct G, and G is guaranteed to be connected. However, this scanning and registration strategy of Pingi is too idealized; in practice it is difficult to guarantee that every pair of adjacent data sets s_i, s_{i+1} (i = 1...n-1) in S overlaps. The present invention assumes instead that the user scans in a certain order, i.e., completes the scanning in strips, but that consecutive strips are not necessarily continuous: the last depth image of one strip and the first depth image of the next strip do not necessarily overlap. Under this assumption, the sequence S can be divided into k strip-shaped subsequences S = (W_1, W_2, ..., W_k), each of which satisfies Pingi's assumption, i.e., any two adjacent depth images in W_i overlap. The concrete division method is:
(1) Add s_1 to the first subsequence W_1.
(2) Register in turn every pair of adjacent depth images s_i, s_{i+1} ∈ S, i = 1...n-1. If s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i; otherwise add s_{i+1} to a new subsequence; and add the relevant edges to the graph G.
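A sketch of this division step in Python is given below; `pairwise_register`, which returns a transformation when two depth images can be registered and None otherwise, stands in for the pairwise registration of steps 1-5 and is an illustrative name.

```python
def split_into_strips(S, pairwise_register):
    """Divide the input sequence S of depth images into strip subsequences in
    which every pair of adjacent images overlaps; also collect the edges
    (i, i+1, transform) of the model graph G found along the way."""
    strips = [[0]]                                  # first subsequence W_1 = (s_1)
    edges = []
    for i in range(len(S) - 1):
        T = pairwise_register(S[i], S[i + 1])       # None if no overlap / failure
        if T is not None:
            strips[-1].append(i + 1)                # extend the current strip
            edges.append((i, i + 1, T))
        else:
            strips.append([i + 1])                  # start a new strip
    return strips, edges
```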
7. Adopt a forward-search strategy to merge these subsequences and construct the complete three-dimensional model.
(1) For every depth image in each subsequence W_i ∈ S, i = 2...k, search the subsequences W_j ∈ S, j = i-1...1 in turn for a depth image that overlaps a depth image in W_i. If such a depth image exists, W_i and W_j can be merged, i.e., each element of W_j is inserted before the head element of W_i, and the relevant edges are added to the graph G.
(2) Repeat step (1) until only one subsequence remains, or none of the remaining subsequences overlap. In the former case the graph G is connected, and a spanning tree of it can be found to construct the complete model; in the latter case, each remaining subsequence represents a part of the model, but these parts cannot be stitched together using the automatic registration algorithm of the present invention.
(3) Rebuild the whole model according to the spanning tree of the graph G.
Obviously, the strategy proposed by Pingi can be regarded as a special case of the present invention. Compared with traditional multi-view registration methods of O(n²) time complexity, which need to register every pair of scan data sets, the method of the present invention does not register scan data within the same subsequence against each other again, and therefore greatly reduces the time of the whole multi-view registration process.
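The forward-search merging of the strips can be sketched as follows, mirroring steps (1)-(3) above with the same hypothetical `pairwise_register` predicate; strip entries are indices of the depth images s_i, and the collected edges correspond to the edges of the model graph G.

```python
def merge_strips(S, strips, edges, pairwise_register):
    """Merge strip subsequences by forward search until one strip remains or
    no two remaining strips overlap; edges of the model graph G are recorded."""
    merged = True
    while merged and len(strips) > 1:
        merged = False
        for i in range(1, len(strips)):               # current strip W_i
            for j in range(i - 1, -1, -1):            # earlier strips W_j, j = i-1...1
                link = find_overlap(S, strips[i], strips[j], pairwise_register)
                if link is not None:
                    edges.append(link)
                    strips[i] = strips[j] + strips[i]  # insert W_j before head of W_i
                    del strips[j]
                    merged = True
                    break
            if merged:
                break
    return strips, edges

def find_overlap(S, strip_a, strip_b, pairwise_register):
    """Return (a, b, transform) for the first overlapping image pair between
    two strips, or None if none of the pairs can be registered."""
    for a in strip_a:
        for b in strip_b:
            T = pairwise_register(S[a], S[b])
            if T is not None:
                return (a, b, T)
    return None
```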
As shown in Fig. 2, (a) shows the two depth images to be registered, (b) shows the two texture images extracted from the scan data, with white dots indicating the corresponding pixels between the two images, (c) shows, as black dots, the vertices in three-dimensional space corresponding to the corresponding pixels between the texture images, and (d) shows the final registration result; Fig. 2 thus illustrates the process of automatically registering two depth images with an example.
As shown in Fig. 3, for the 13 input depth images, every pair of adjacent data sets is first registered, so that the whole scanning sequence can be divided into 5 strip-shaped subsequences; these 5 subsequences are then merged to finally generate a complete model.
Figs. 4, 5, 6 and 7 show the registration results of scan data of different models obtained with the method of the invention: Fig. 4 is the registration result of the first Bigfoot model, Fig. 5 is the registration result of the second Bigfoot model, Fig. 6 is the registration result of the Buddha model, and Fig. 7 is the registration result of the terracotta warrior model. The scan data of the first and second Bigfoot models contain texture images, so registration can be performed directly from the known texture images, whereas the scan data of the Buddha model and the terracotta warrior model contain no texture images, so texture images must first be generated from the depth images and registration is then based on the generated texture images. The registration results show that the present invention can register well both scan data that contain texture images and scan data that do not.

Claims (3)

1. A depth image auto-registration method combined with texture information, characterized in that it is divided into the registration of two depth images and the registration of multiple depth images, wherein:
the steps for registering two depth images are as follows:
let P and Q be the scan data to be registered; the texture image and depth image of P are I_p and R_p respectively, and the texture image and depth image of Q are I_q and R_q respectively;
in the first step, extract the texture images I_p and I_q from P and Q or generate the texture images I_p and I_q from the depth images R_p and R_q, wherein the method of generating the texture images I_p and I_q from the depth images R_p and R_q is:
apply Laplacian smoothing to the depth image to obtain its base model, compute for each vertex in the depth image the distance to its corresponding vertex on the base model, and use each distance value as the brightness value of the pixel corresponding to that vertex in the texture image;
in the second step, extract interest pixels in the texture images I_p and I_q based on SIFT features, and find the candidate set C of matched pixel pairs among them by the cross-check method, the concrete steps being:
(2.1) use the SIFT algorithm to find the interest pixel sets in I_p and I_q respectively; for each interest pixel p detected by the SIFT algorithm, denote the SIFT feature vector of p by sift(p), and define the distance d(p, q) = ||sift(p) - sift(q)|| to represent the degree of similarity between two interest pixels p and q: the smaller this distance, the more similar the two pixels; conversely, the larger the distance, the greater their difference;
(2.2) according to the Euclidean distance between the SIFT feature vectors of two pixels, search for corresponding pixels between the interest pixel sets of I_p and I_q, as follows:
(2.2.1) delete interest pixels that have no corresponding vertex in the depth image;
(2.2.2) for an interest pixel p in the texture image, if the vertex corresponding to p in the depth image lies on the boundary of the depth image mesh, delete p;
(2.2.3) for each key pixel i_p in I_p, find in I_q the two key pixels i_q^1 and i_q^2 with the smallest distances to i_p; assume d(i_p, i_q^1) ≤ d(i_p, i_q^2); define the dissimilarity diff(i_q^1, i_q^2) between i_q^1 and i_q^2 in terms of these two distances, and predefine a dissimilarity threshold δ_diff; if diff(i_q^1, i_q^2) > δ_diff, then there is a notable difference between i_q^1 and i_q^2, and i_q^1 can serve as the corresponding pixel of i_p; otherwise i_p has multiple correspondences in I_q and i_p is discarded; when i_q^1 can serve as the corresponding pixel of i_p, use the same method to find in I_p the corresponding pixel i_p' of i_q^1; if i_p' and i_p are the same pixel, then i_p and i_q^1 are a pair of candidate corresponding pixels between I_p and I_q, and (i_p, i_q^1) is added to the candidate set C; otherwise i_p is still considered to have no corresponding pixel in I_q;
in the third step, use an adaptive RANSAC algorithm to find the correctly matched pixel pairs in the candidate set C according to geometric-information constraints, the concrete steps being:
(3.1) sort all elements (i_p, i_q) in the candidate set C in ascending order of d(i_p, i_q), and put the result into a queue L;
(3.2) check whether the first l elements of L are correct matches, as follows:
(3.2.1) randomly pick 3 elements from the first l elements of L as a sample;
(3.2.2) compute the rigid-body transformation matrix from the geometric information of the vertices in the depth images corresponding to the 3 matched pixel pairs;
(3.2.3) according to the geometric information of the corresponding vertices, judge whether each of the remaining l-3 elements is a correct match under this rigid-body transformation; correctly matched elements are inliers and the rest are outliers; all inliers form the consensus set of this sample, and the upper bound on the number of samples is then updated according to the number of inliers;
(3.2.4) repeat steps (3.2.1)-(3.2.3) until the number of iterations reaches the sampling upper bound, then find the sample whose consensus set contains the most elements, i.e., the sample with the most inliers, and take the consensus set of that sample as the set of correct matches;
in the fourth step, find the matched vertex pairs in three-dimensional space corresponding to the matched pixel pairs, and compute the rigid-body transformation matrix between the two depth images from these vertex correspondences;
in the fifth step, use an improved ICP algorithm to optimize the result of the fourth step and complete the accurate registration of the two depth images; the concrete steps are: first, vertices located on or near the boundary of a depth image are not considered during the iterations, which yields better accuracy; second, the distance threshold is adjusted dynamically according to the distances of the corresponding points at each iteration, so that in each iteration only the corresponding point pairs within the threshold are used to compute the new transformation matrix and to minimize the sum of the distances of the corresponding points;
the steps for registering multiple scan data sets are as follows:
let the model graph representing the topological relations between the depth images be G = (V, E); each vertex v_i ∈ V of G represents a depth image, each edge e_{i,j} of G indicates that the two depth images represented by vertices v_i and v_j have an overlapping region, and the weight of this edge is the rigid-body transformation matrix from v_i to v_j; S = (s_1, s_2, ..., s_n) is the input sequence of n depth images to be registered;
in the sixth step, based on the pairwise registration of two depth images, divide the input sequence S of depth images into k strip-shaped subsequences S = (W_1, W_2, ..., W_k), where each W_i ∈ S, i = 1...k, is a subsequence of several depth images and any two adjacent depth images in W_i have an overlapping region;
in the seventh step, adopt a forward-search strategy to merge the subsequences, and construct the complete three-dimensional model from the merged subsequences.
2. The depth image auto-registration method combined with texture information according to claim 1, characterized in that: in the sixth step, based on the pairwise registration of two depth images, the steps of dividing the input sequence of depth images into several strip-shaped subsequences are:
(1) add s_1 to the first subsequence W_1;
(2) register in turn every pair of adjacent depth images s_i, s_{i+1} ∈ S, i = 1...n-1; if s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i; otherwise add s_{i+1} to a new subsequence; and add the relevant edges to the graph G.
3. The depth image auto-registration method combined with texture information according to claim 1, characterized in that: in the seventh step, the steps of adopting the forward-search strategy to merge these subsequences and constructing the complete three-dimensional model from the merged subsequences are:
(1) for every depth image in each subsequence W_i ∈ S, i = 2...k, search the subsequences W_j ∈ S, j = i-1...1 in turn for a depth image that overlaps a depth image in W_i; if such a depth image exists, merge W_i and W_j, i.e., insert each element of W_j before the head element of W_i, and add the relevant edges to the graph G;
(2) repeat step (1) until only one subsequence remains, or no two of the remaining subsequences W_i overlap;
(3) rebuild the whole model according to the graph G.
CN200810224183XA 2008-10-24 2008-10-24 Depth image auto-registration method combined with texture information Expired - Fee Related CN101388115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810224183XA CN101388115B (en) 2008-10-24 2008-10-24 Depth image auto-registration method combined with texture information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810224183XA CN101388115B (en) 2008-10-24 2008-10-24 Depth image auto-registration method combined with texture information

Publications (2)

Publication Number Publication Date
CN101388115A CN101388115A (en) 2009-03-18
CN101388115B true CN101388115B (en) 2011-07-27

Family

ID=40477519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810224183XA Expired - Fee Related CN101388115B (en) Depth image auto-registration method combined with texture information

Country Status (1)

Country Link
CN (1) CN101388115B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570823A (en) * 2016-10-11 2017-04-19 山东科技大学 Planar feature matching-based point cloud crude splicing method

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697232B (en) * 2009-09-18 2012-03-07 浙江大学 SIFT characteristic reducing method facing close repeated image matching
CN102034231B (en) * 2009-09-25 2012-06-06 汉王科技股份有限公司 Image sequence registration method
CN101719282B (en) * 2009-11-11 2012-06-27 哈尔滨工业大学 Digital jointing method of three-dimensional object fractures based on fracture section
CN101916456B (en) * 2010-08-11 2012-01-04 无锡幻影科技有限公司 Method for producing personalized three-dimensional cartoon
CN101930618B (en) * 2010-08-20 2012-05-30 无锡幻影科技有限公司 Method for producing individual two-dimensional anime
CN102034274A (en) * 2010-12-10 2011-04-27 中国人民解放军国防科学技术大学 Method for calculating corresponding relationship between frame and frame in 3D point cloud sequence
CN102202159B (en) * 2011-03-29 2014-08-27 段连飞 Digital splicing method for unmanned aerial photographic photos
EP2697743A4 (en) 2011-04-11 2014-09-24 Intel Corp Gesture recognition using depth images
CN102208033B (en) * 2011-07-05 2013-04-24 北京航空航天大学 Data clustering-based robust scale invariant feature transform (SIFT) feature matching method
CN102945289B (en) * 2012-11-30 2016-01-06 苏州搜客信息技术有限公司 Based on the image search method of CGCI-SIFT local feature
US9083960B2 (en) * 2013-01-30 2015-07-14 Qualcomm Incorporated Real-time 3D reconstruction with power efficient depth sensor usage
CN103236081B (en) * 2013-04-25 2016-04-27 四川九洲电器集团有限责任公司 A kind of method for registering of colour point clouds
CN103236064B (en) * 2013-05-06 2016-01-13 东南大学 Point cloud auto-registration method based on normal vectors
CN103247040B (en) * 2013-05-13 2015-11-25 北京工业大学 Based on the multi-robot system map joining method of hierarchical topology structure
CN103279955B (en) * 2013-05-23 2016-03-09 中国科学院深圳先进技术研究院 Image matching method and system
KR102225617B1 (en) * 2014-11-03 2021-03-12 한화테크윈 주식회사 Method of setting algorithm for image registration
CN104408701B (en) * 2014-12-03 2018-10-09 中国矿业大学 A kind of large scene video image joining method
CN105913492B (en) * 2016-04-06 2019-03-05 浙江大学 A kind of complementing method of RGBD objects in images shape
CN106204710A (en) * 2016-07-13 2016-12-07 四川大学 The method that texture block based on two-dimensional image comentropy is mapped to three-dimensional grid model
CN109215110A (en) * 2017-07-21 2019-01-15 湖南拓视觉信息技术有限公司 Whole scene scanning means and 3-D scanning modeling
CN108520055A (en) * 2018-04-04 2018-09-11 天目爱视(北京)科技有限公司 A kind of product testing identification method compared based on temmoku point cloud
CN109035384A (en) * 2018-06-06 2018-12-18 广东您好科技有限公司 Pixel synthetic technology based on 3-D scanning, the automatic vertex processing engine of model
CN109410318B (en) * 2018-09-30 2020-09-08 先临三维科技股份有限公司 Three-dimensional model generation method, device, equipment and storage medium
CN112734821B (en) * 2019-10-28 2023-12-22 阿里巴巴集团控股有限公司 Depth map generation method, computing node cluster and storage medium
CN110946654B (en) * 2019-12-23 2022-02-08 中国科学院合肥物质科学研究院 Bone surgery navigation system based on multimode image fusion
CN115953406B (en) * 2023-03-14 2023-05-23 杭州太美星程医药科技有限公司 Matching method, device, equipment and readable medium for medical image registration

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1355277A2 (en) * 2002-04-18 2003-10-22 Canon Europa N.V. Three-dimensional computer modelling
CN1588424A (en) * 2004-07-02 2005-03-02 清华大学 Finger print identifying method based on broken fingerprint detection
CN1209073C (en) * 2001-12-18 2005-07-06 中国科学院自动化研究所 Identity discriminating method based on living body iris
CN101154265A (en) * 2006-09-29 2008-04-02 中国科学院自动化研究所 Method for recognizing iris with matched characteristic and graph based on partial bianry mode

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1209073C (en) * 2001-12-18 2005-07-06 中国科学院自动化研究所 Identity discriminating method based on living body iris
EP1355277A2 (en) * 2002-04-18 2003-10-22 Canon Europa N.V. Three-dimensional computer modelling
CN1588424A (en) * 2004-07-02 2005-03-02 清华大学 Finger print identifying method based on broken fingerprint detection
CN101154265A (en) * 2006-09-29 2008-04-02 中国科学院自动化研究所 Method for recognizing iris with matched characteristic and graph based on partial bianry mode

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570823A (en) * 2016-10-11 2017-04-19 山东科技大学 Planar feature matching-based point cloud crude splicing method
CN106570823B (en) * 2016-10-11 2019-06-18 山东科技大学 Planar feature matching-based point cloud coarse splicing method

Also Published As

Publication number Publication date
CN101388115A (en) 2009-03-18

Similar Documents

Publication Publication Date Title
CN101388115B (en) Depth image auto-registration method combined with texture information
US10109055B2 (en) Multiple hypotheses segmentation-guided 3D object detection and pose estimation
Goesele et al. Multi-view stereo for community photo collections
CN100559398C (en) Automatic deepness image registration method
WO2015139574A1 (en) Static object reconstruction method and system
CN108921895B (en) Sensor relative pose estimation method
US20200380711A1 (en) Method and device for joint segmentation and 3d reconstruction of a scene
Tung et al. Dynamic surface matching by geodesic mapping for 3d animation transfer
Li et al. Confidence-based large-scale dense multi-view stereo
CN112015275A (en) Digital twin AR interaction method and system
Ceylan et al. Factored facade acquisition using symmetric line arrangements
CN110910433A (en) Point cloud matching method based on deep learning
CN115393519A (en) Three-dimensional reconstruction method based on infrared and visible light fusion image
CN115239870A (en) Multi-view stereo network three-dimensional reconstruction method based on attention cost body pyramid
CN114996814A (en) Furniture design system based on deep learning and three-dimensional reconstruction
CN116310098A (en) Multi-view three-dimensional reconstruction method based on attention mechanism and variable convolution depth network
CN114782645A (en) Virtual digital person making method, related equipment and readable storage medium
Dai et al. A novel two-stage algorithm for accurate registration of 3-D point clouds
Ramirez et al. Booster: a benchmark for depth from images of specular and transparent surfaces
Lin et al. High-resolution multi-view stereo with dynamic depth edge flow
Wang et al. Fully automatic structure from motion with a spline-based environment representation
Li et al. Overview of 3d reconstruction methods based on multi-view
Lyra et al. Development of an efficient 3D reconstruction solution from permissive open-source code
CN111508063A (en) Three-dimensional reconstruction method and system based on image
Yano et al. Parameterized B-rep-Based Surface Correspondence Estimation for Category-Level 3D Object Matching Applicable to Multi-Part Items

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110727

Termination date: 20141024

EXPY Termination of patent right or utility model