CN107945221A - Three-dimensional scene feature representation and high-precision matching method based on RGB-D images - Google Patents

Info

Publication number
CN107945221A
CN107945221A (application CN201711293626.6A)
Authority
CN
China
Prior art keywords
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711293626.6A
Other languages
Chinese (zh)
Other versions
CN107945221B (en)
Inventor
邱钧
刘畅
吴丽娜
高姗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University filed Critical Beijing Information Science and Technology University
Priority to CN201711293626.6A priority Critical patent/CN107945221B/en
Publication of CN107945221A publication Critical patent/CN107945221A/en
Application granted granted Critical
Publication of CN107945221B publication Critical patent/CN107945221B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional scene feature representation and high-precision matching method based on RGB-D images. Using a perspective projection model and scale-space theory, the three-dimensional feature points of an RGB-D image are detected and extracted. Exploiting the rotational invariance of the circle, four concentric circular regions centered on each feature point are selected as the description region for that feature point. Within each annular region and the innermost central circle, the gradient modulus and direction of the pixels are computed and a direction histogram is built, where an "annular region" is the ring between two adjacent concentric circles. From the direction histograms an annular descriptor is built, each feature point is described, a feature vector is generated, and feature points are matched by the Euclidean distance between feature vectors. With the method provided by the invention, three-dimensional feature representation and matching of RGB-D images avoids the assignment of a principal direction and reduces the dimension of the feature vector, which cuts the computation required for subsequent feature matching, saves time, and enables real-time feature matching.

Description

Three-dimensional scene feature representation and high-precision matching method based on RGB-D images
Technical field
The present invention relates to the fields of computer vision and digital image processing, and in particular to a three-dimensional scene feature representation and high-precision matching method based on RGB-D images.
Background art
Local features are invariant to transformations such as image scale, rotation, illumination, and viewpoint, and they match well under the influence of noise and occlusion, so they are widely used in image matching, object recognition and classification, image retrieval, and related fields. Local feature extraction consists of two steps: feature point detection and feature description generation. Compared with the detection algorithm chosen in the feature point detection step, the descriptor algorithm used in the feature description step has the greater influence on the performance of the extracted local features. How to build a feature descriptor that is both robust and distinctive is therefore a key research topic.
Constructing a feature descriptor means describing the neighborhood information of a feature point, and the feature vector is constructed on the basis of that description. The SIFT descriptor proposed by Lowe is the most classical feature descriptor; it generates a 128-dimensional feature vector from the gradient information in the neighborhood of a feature point. However, the SIFT descriptor must assign a principal direction to each feature point to guarantee that the descriptor is rotation invariant. Assigning a principal direction requires computing the gradient information over every feature point neighborhood, and the SIFT feature vector has a high dimension, so the computation is heavy and time-consuming, which hurts the real-time performance of the algorithm.
A technical solution is therefore desired that overcomes, or at least mitigates, at least one of the above drawbacks of the prior art.
Summary of the invention
An object of the present invention is to provide a three-dimensional scene feature representation and high-precision matching method based on RGB-D images that overcomes, or at least mitigates, at least one of the above drawbacks of the prior art.
To achieve the above object, the present invention provides an RGB-D-based scene three-dimensional feature representation and high-precision matching method, comprising:
Step 101: using a perspective projection model and scale-space theory, detect and extract the three-dimensional feature points of the RGB-D image;
Step 102: using the rotational invariance of the circle, select for each feature point obtained in step 101 four concentric circular regions centered on that feature point as the feature description region needed for that feature point;
Step 103: within each annular region and the innermost central circle determined in step 102, compute the gradient modulus and direction of the pixels and build a direction histogram; here an "annular region" is the ring between two adjacent concentric circles;
Step 104: from the direction histograms obtained in step 103, build an annular descriptor, describe each feature point, generate its feature vector, and match feature points by the Euclidean distance between feature vectors.
Further, step 102 specifically comprises:
To guarantee that the three-dimensional feature points are rotation invariant, the rotational invariance of the circle is used: for each feature point, four concentric circular regions centered on that feature point, with radii 2, 4, 6 and 8, are selected as the region needed for describing the feature of that point. When the image rotates, the pixels in the neighborhood of a feature point move accordingly, but the relative positions of the pixels within the same ring do not change.
Further, step 103 specifically comprises:
Divide 0°-360° into 12 direction intervals and, within each annular region, accumulate the gradients over the 12 direction intervals to build a direction histogram. The horizontal axis of the direction histogram is the gradient direction; the vertical axis is the weighted accumulation of the gradient moduli for that direction, the weighting function being a Gaussian. The gradient modulus and direction are computed as in equations (13) and (14):
m(u, v) = sqrt( (F(u+1, v) - F(u-1, v))^2 + (F(u, v+1) - F(u, v-1))^2 )    (13)
θ(u, v) = arctan( (F(u, v+1) - F(u, v-1)) / (F(u+1, v) - F(u-1, v)) )    (14)
Further, step 104 specifically comprises:
Step 1041: by the above computation, each of the four concentric rings of a feature point corresponds to one direction histogram; from the weighted gradient-modulus accumulation values on the vertical axis of the histogram, each ring i yields a 12-dimensional vector, i = 1, 2, 3, 4;
Step 1042: going from the inside out, take the 12-dimensional vector of the innermost central circle as elements 1-12 of the initial feature vector, and the 12-dimensional vector of the first ring adjacent to the innermost circle as elements 13-24 of the feature vector, and so on, until the 48-dimensional initial feature vector Des of the RGB-D image three-dimensional feature point is obtained;
Step 1043: to guarantee rotation invariance, sort the initial feature vector Des: let Φ = (φ1, φ2, ..., φ12) and mark the largest component of Φ; if φ1 is the largest component of Φ, Des is left unprocessed; if φ1 is not the largest component of Φ, suppose φj = max{φ1, φ2, ..., φ11, φ12}, then circularly shift Φ left until φj occupies the first position, and apply the same circular left shift to the other rings, giving the sorted initial feature vector;
Step 1044: to strengthen matching robustness, assign a secondary vector to the feature point;
Step 1045: to reduce the influence of illumination, normalize the initial feature vector and generate the final feature description vector;
Step 1046: after the three-dimensional feature points are extracted and the feature description vectors generated, match feature points using the Euclidean distance between feature vectors as the similarity measure.
Further, step 1044 specifically comprises:
If another component of Φ exceeds 80% of the largest component of Φ, circularly left-shift the vector so that this component occupies the first position; the feature point is thereby copied into multiple feature points whose positions and scales are identical but whose initial feature vectors differ.
Further, " carrying out dimension normalization processing to initial characteristics vector " in step 1045 is specific such as (15) formula institute Show:
Further, step 101 specifically comprises:
Step 111: using the perspective projection model, give a dimension-reduction computation for RGB-D images that preserves the three-dimensional geometric structure, and obtain the parametric representation of the scene in the camera coordinate system;
Step 112: via the diffusion equation, using finite differences and scale-space theory, build the RGB-D scale space for detecting the three-dimensional feature points of the RGB-D image;
Step 113: detect extrema on the RGB-D scale space to obtain the positions of the feature points; and
Step 114: precisely locate the feature points by sub-pixel interpolation, and screen out low-contrast and edge-response points, enhancing the stability and noise resistance of feature matching.
Further, " parameter of the object point in camera coordinates system represents in scene " in step 111 is:
(1) in formula,It is coordinates of the object point P in camera coordinates system, (u, v) plane is imaging plane, and ω is phase The horizontal view angle of machine, W, H represent image I0Resolution ratio, D (u, v) be object point to camera horizontal distance.
Further, step 112 specifically comprises:
According to scale-space theory, the Gaussian scale space L(x, y, σ) of the image I0(x, y) is expressed as the convolution of the Gaussian function G(x, y, σ) with the original image I0, as shown in equation (2):
L(x, y, σ) = G(x, y, σ) * I0(x, y),    (2)
where G(x, y, σ) is the two-dimensional Gaussian kernel. The Gaussian scale space of image I0 is then equivalent to the initial-value problem of the diffusion equation (3), i.e.:
∂f/∂σ = ∂²f/∂x² + ∂²f/∂y²,  with f|_{σ=0} = I0    (3)
The diffusion equation (3) has the unique solution f = G(x, y, σ) * I0(x, y), where * denotes convolution;
Using finite-difference theory, the difference approximation of the diffusion-equation initial-value problem is obtained as follows:
The support domain Ω of image I0 is discretized with step h into Ω_d, and the following difference components are introduced to obtain the difference form of the diffusion equation (3) and thereby build the RGB-D scale space. The difference components are expressed as follows:
∂_u f = ( f(u+h, v) - f(u-h, v) ) / ||r(u+h, v) - r(u-h, v)|| = ( f(u+h, v) - f(u-h, v) ) / r_u^{+-}    (4)
∂_{u+} f = ( f(u+h, v) - f(u, v) ) / ||r(u+h, v) - r(u, v)|| = ( f(u+h, v) - f(u, v) ) / r_u^{+}    (5)
∂_{u-} f = ( f(u, v) - f(u-h, v) ) / ||r(u-h, v) - r(u, v)|| = ( f(u, v) - f(u-h, v) ) / r_u^{-}    (6)
∂_{uu} f = ( ∂_{u+} f - ∂_{u-} f ) / r_u^{+-} = f(u+h, v)/(r_u^{+} r_u^{+-}) - f(u, v)/(r_u^{+} r_u^{+-}) - f(u, v)/(r_u^{-} r_u^{+-}) + f(u-h, v)/(r_u^{-} r_u^{+-})    (7)
where r_u^{+-}, r_u^{+} and r_u^{-} are simplified notation for the denominators above;
Similarly, ∂_v f, ∂_{v+} f and ∂_{v-} f are defined analogously, and:
∂_{vv} f = ( ∂_{v+} f - ∂_{v-} f ) / r_v^{+-} = f(u, v+h)/(r_v^{+} r_v^{+-}) - f(u, v)/(r_v^{+} r_v^{+-}) - f(u, v)/(r_v^{-} r_v^{+-}) + f(u, v-h)/(r_v^{-} r_v^{+-})    (8)
Therefore, the discrete second-order differential operator of the Laplace operator L is introduced, giving the difference equation (9). Writing equation (9) in matrix form and letting A_n denote the resulting matrix, equation (9) is approximated, by the definition of the derivative, by equation (10):
( f^(n+1) - f^(n) ) / τ = A_n f^(n),  with f^(0) = I0    (10)
In equation (10), τ is the scale difference between image layers, i.e. τ = σ^(n+1) - σ^(n); the RGB-D scale space is built by iteratively solving equation (10).
Further, step 114 specifically comprises:
To obtain the extremum points of the continuous case, the feature points are precisely located by sub-pixel interpolation, as follows:
Step 1141: let F(u, v) = Af(u, v); supposing the extremum point obtained by the above extremum detection is (u1, v1), expand F(u, v) in a Taylor series at this extremum point (u1, v1) and find the stationary point to obtain the offset;
Step 1142: locate the feature point according to how the components of the offset compare with 0.5;
To enhance the stability and noise resistance of feature matching, the low-contrast and edge-response points are screened out, as follows:
Step 1143: delete the low-contrast points among the located feature points;
Step 1144: delete the edge-response points among the located feature points;
Step 1145: after the screening of steps 1143 and 1144, the remaining feature points are the stable three-dimensional feature points of the RGB-D image.
Because the present invention uses the rotational invariance of the circle, selecting multiple concentric circular regions centered on the feature point with preset radii as the feature description region, the pixels in the neighborhood of a feature point move accordingly when the image rotates, while the relative positions of the pixels within the same ring remain unchanged. It is therefore unnecessary to assign a direction to the feature points in order to guarantee that the three-dimensional feature points are rotation invariant. The direction-assignment step is thus eliminated, which saves substantial time in the computation of the descriptor algorithm and provides favorable conditions for real-time feature description.
Brief description of the drawings
Fig. 1 is a flow chart of the RGB-D-based scene three-dimensional feature representation and high-precision matching method provided by an embodiment of the present invention.
Fig. 2 is a schematic diagram of the parametric representation of an object point in the camera coordinate system.
Fig. 3 shows the sampling region and direction histogram of the feature descriptor.
Fig. 4 shows the matching results of sample picture 1 under image translation, scaling and rotation transformations.
Fig. 4a shows the matching result of sample picture 1 when the viewpoint moves along the X axis of the camera coordinate system.
Fig. 4b shows the matching result of sample picture 1 when the viewpoint moves along the Y axis of the camera coordinate system.
Fig. 4c shows the matching result of sample picture 1 when the viewpoint moves along the Z axis of the camera coordinate system.
Fig. 4d shows the matching result of sample picture 1 when the viewpoint rotates about the optical axis.
Fig. 5 shows the matching results of sample picture 2 under image translation, scaling and rotation transformations.
Fig. 5a shows the matching result of sample picture 2 when the viewpoint moves along the X axis of the camera coordinate system.
Fig. 5b shows the matching result of sample picture 2 when the viewpoint moves along the Y axis of the camera coordinate system.
Fig. 5c shows the matching result of sample picture 2 when the viewpoint moves along the Z axis of the camera coordinate system.
Fig. 5d shows the matching result of sample picture 2 when the viewpoint rotates about the optical axis.
Detailed description of the embodiments
In the accompanying drawings, identical or similar reference numerals denote identical or similar elements or elements having identical or similar functions. Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In the description of the present invention, terms such as "center", "longitudinal", "transverse", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore must not be understood as limiting the scope of the present invention.
An embodiment of the present invention provides an RGB-D-based scene three-dimensional feature representation and high-precision matching method which, as shown in Fig. 1, comprises the following steps:
Step 101: using a perspective projection model and scale-space theory, detect and extract the three-dimensional feature points of the RGB-D image.
Step 102: using the rotational invariance of the circle, select for each feature point obtained in step 101 four concentric circular regions centered on that feature point as the feature description region needed for that feature point.
Step 103: within each annular region and the innermost central circle determined in step 102, compute the gradient modulus and direction of the sampling points and build a direction histogram. Here an "annular region" is the ring between two adjacent concentric circles.
Step 104: from the direction histograms obtained in step 103, build an annular descriptor, describe each feature point, generate its feature vector, and match feature points by the Euclidean distance between feature vectors.
Each of these steps of the present invention is described in detail below.
Step 101 specifically includes:
Step 111: using the perspective projection model, give a dimension-reduction computation for RGB-D images that preserves the three-dimensional geometric structure, and obtain the parametric representation of the scene in the camera coordinate system.
Step 112: via the diffusion equation, using finite differences and scale-space theory, build the RGB-D scale space for detecting the three-dimensional feature points of the RGB-D image.
Step 113: detect extrema on the RGB-D scale space to obtain the positions of the feature points.
Step 114: precisely locate the feature points by sub-pixel interpolation, and screen out low-contrast and edge-response points, enhancing the stability and noise resistance of feature matching.
In step 111, the RGB-D image can be acquired with the RGB-D cameras now available on the market, the Microsoft Kinect, light-field cameras, and the like. An RGB-D image is two images: one is the RGB three-channel color image, the other is the Depth image. The Depth image is similar to a gray-scale image, except that each of its pixel values is the actual distance from the sensor to the object. The image I0 mentioned herein refers to the RGB image, and the Depth value corresponding to each picture point of I0 is also known; that is, the "image I0" hereinafter is the RGB image carrying Depth information.
In step 111, the perspective projection model is existing technology. Fig. 2 is a schematic diagram of perspective projection and illustrates the relation between an object point and an image point in the camera coordinate system. In Fig. 2, the coordinate system OXYZ is the camera coordinate system, and the coordinate system O'UV is the coordinate system of the imaging surface in the camera. The (u, v) plane is the imaging plane, P is a point on an object in the actual scene (an object point for short), and p = m(u, v) is the image point corresponding to the object point P. F denotes the distance from the camera optical center O to the imaging surface (u, v), i.e. the image distance. D(u, v) is the depth of the actual-scene object point P corresponding to the image point p at (u, v) of the RGB image, i.e. the horizontal distance from the object point P to the camera. ω is the horizontal field of view of the camera. W and H give the resolution of the image I0. With the center of image I0 as the coordinate origin, the range of image I0 in the O'UV coordinate system is:
Moreover, it can be derived from Fig. 2 that the coordinates of a scene object point in the camera coordinate system are
r(u, v) = ( 2u tan(ω/2), 2v (H/W) tan(ω/2), 1 )^T D(u, v)    (1)
Equation (1) gives the parametric representation, in the camera coordinate system, of the actual-scene object point corresponding to each image point of I0, where u and v are the pixel indices in the image I0.
In step 111, " RGB-D images protect the dimensionality reduction computational methods of three-dimensional geometrical structure " specifically includes:
Using perspective projection, i.e. Fig. 2 in patent, RBG images and Depth images are combined, obtained actual field scenery Parameter expression of the body in camera coordinates system, i.e.,Function, the function not only contain RGB image half-tone information and The depth information of Depth images, and three-D space structure is converted into two dimensional image plane, RGB-D images are dropped Dimension processing, and the dimension-reduction treatment remains the three-dimensional geometrical structure information of object.
In step 112, it is known from scale-space theory that the Gaussian scale space L(x, y, σ) of the image I0(x, y) is expressed as the convolution of the Gaussian function G(x, y, σ) with the original image I0, as shown in equation (2):
L(x, y, σ) = G(x, y, σ) * I0(x, y),    (2)
where G(x, y, σ) in equation (2) is the two-dimensional Gaussian kernel.
The Gaussian scale space of image I0 can also be expressed as the initial-value problem of the diffusion equation, i.e. the following equation (3):
∂f/∂σ = ∂²f/∂x² + ∂²f/∂y²,  with f|_{σ=0} = I0    (3)
The diffusion equation (3) has the unique solution f = G(x, y, σ) * I0(x, y), where * denotes convolution. The scale space of image information processing is thereby connected with the diffusion equation (3).
Further, according to finite-difference theory, the support domain Ω of image I0 is discretized with step h into Ω_d. Difference components are introduced to obtain the difference form of the diffusion equation (3) and thereby build the RGB-D scale space; the difference components are given by equations (4)-(8) above, where r_u^{+-}, r_u^{+} and r_u^{-} (and likewise r_v^{+-}, r_v^{+} and r_v^{-}) are simplified notation for the corresponding denominators, and ∂_v f, ∂_{v+} f and ∂_{v-} f are defined analogously to their u counterparts.
Therefore, the discrete second-order differential operator of the Laplace operator L is introduced, giving the difference equation (9). Writing equation (9) in matrix form and letting A_n denote the resulting matrix, equation (9) is approximated, by the definition of the derivative, by equation (10):
( f^(n+1) - f^(n) ) / τ = A_n f^(n),  with f^(0) = I0    (10)
Here τ is the scale difference between image layers, i.e. τ = σ_{n+1} - σ_n, where σ_{n+1} and σ_n denote the degree of blur, i.e. the scale, of the images f^(n+1) and f^(n) respectively. By equation (10), given the image I0, the blurred images are obtained by iterative solution, which builds the RGB-D scale space.
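A minimal sketch of this iteration follows, assuming the geometry-aware difference components of equations (4)-(8) with h = 1 pixel and, for brevity, periodic boundary handling via np.roll; all names are illustrative, and the sign of the last term in equations (7) and (8) is taken as positive, as the first equality in those equations implies.

```python
import numpy as np

def rgbd_scale_space(f0, r, sigmas):
    """Iterates equation (10): f^(n+1) = f^(n) + tau * A_n f^(n).

    f0     : (H, W) image intensities I0.
    r      : (H, W, 3) parametric representation of equation (1).
    sigmas : increasing scale levels sigma_0, sigma_1, ...
    """
    def dist(a, b):
        return np.linalg.norm(a - b, axis=-1) + 1e-12   # avoid division by 0

    f = f0.astype(np.float64)
    levels = [f]
    for n in range(len(sigmas) - 1):
        tau = sigmas[n + 1] - sigmas[n]                 # inter-layer scale difference
        fpu, fmu = np.roll(f, -1, 1), np.roll(f, 1, 1)  # f(u+h, v), f(u-h, v)
        fpv, fmv = np.roll(f, -1, 0), np.roll(f, 1, 0)  # f(u, v+h), f(u, v-h)
        rpu, rmu = np.roll(r, -1, 1), np.roll(r, 1, 1)
        rpv, rmv = np.roll(r, -1, 0), np.roll(r, 1, 0)
        ru_pm, ru_p, ru_m = dist(rpu, rmu), dist(rpu, r), dist(rmu, r)
        rv_pm, rv_p, rv_m = dist(rpv, rmv), dist(rpv, r), dist(rmv, r)
        # Equations (7) and (8): geometry-weighted second differences.
        f_uu = (fpu / ru_p - f / ru_p - f / ru_m + fmu / ru_m) / ru_pm
        f_vv = (fpv / rv_p - f / rv_p - f / rv_m + fmv / rv_m) / rv_pm
        f = f + tau * (f_uu + f_vv)                     # equation (10)
        levels.append(f)
    return levels
```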
In step 113, the extrema of the scale-normalized Laplacian function produce the most stable image features compared with other feature-extraction functions (such as the gradient, Hessian or Harris functions). Since the difference equation (9) with which this embodiment builds the RGB-D scale space is in turn an approximation of the scale-normalized Laplacian function, detecting extrema on the RGB-D scale space yields the potential feature points of the image.
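The source does not spell out the comparison neighborhood for this extremum detection, so the sketch below assumes the usual rule of comparing each sample with its 26 neighbors in the 3x3x3 scale-space cube; the names are illustrative.

```python
import numpy as np

def local_extrema(levels):
    """Step 113 sketch: candidate feature points are strict extrema of the
    RGB-D scale space within their 3x3x3 neighborhood (26 neighbors)."""
    L = np.stack(levels)                                # (scales, H, W)
    points = []
    for s in range(1, L.shape[0] - 1):
        for u in range(1, L.shape[1] - 1):
            for v in range(1, L.shape[2] - 1):
                cube = L[s-1:s+2, u-1:u+2, v-1:v+2]
                c = L[s, u, v]
                strict = (cube == c).sum() == 1         # value attained once
                if strict and (c == cube.max() or c == cube.min()):
                    points.append((s, u, v))
    return points
```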
In step 114, the extrema obtained in step 113 are detected in the discrete case, not the continuous case. To obtain the extremum points of the continuous case, sub-pixel interpolation is needed to find the continuous-case extremum points, after which the feature points are screened by contrast and edge response.
Step 114 specifically includes:
To obtain the extremum points of the continuous case, the feature points are precisely located by sub-pixel interpolation, as follows:
Step 1141: let F(u, v) = Af(u, v); supposing the extremum point obtained by the above extremum detection is (u1, v1), expand F(u, v) in a Taylor series at this extremum point (u1, v1) and find the stationary point to obtain the offset;
Step 1142: locate the feature point according to how the components of the offset compare with 0.5;
To enhance the stability and noise resistance of feature matching, the low-contrast and edge-response points are screened out, as follows:
Step 1143: delete the low-contrast points among the located feature points;
Step 1144: delete the edge-response points among the located feature points;
Step 1145: after the screening of steps 1143 and 1144, the remaining feature points are the stable three-dimensional feature points of the RGB-D image.
" F=Af Taylor are unfolded at this extreme point " in step 1141 is specific as follows:
In the above-mentioned extreme point (u detected1, v1) place Taylor expansion:
(11) in formula,For offset, Fu, FvRepresent that F (u, v) is inclined to variable u, the single order of v respectively Derivative, Fuu, FvvRepresent F (u, v) to variable u, the second-order partial differential coefficient of v, F respectivelyuvRepresent that F (u, v) is inclined to variable u, the mixing of v Derivative.
" stationary point is asked to obtain offset in step 1141" specific as follows:
Stationary point is asked to (11) formula, then is had
In step 1142 " according to offsetMiddle the important magnitude relationship location feature point with 0.5 " includes:
If offsetThe absolute value of middle whole component is both less than 0.5, retains this extreme point (u1, v1) and its offset And according to this extreme point (u1, v1) and offsetExtreme point (u, v) in the case of positioning is continuous;If offsetIn have absolute value Component more than 0.5, then the position for needing to replace extreme point as the case may be is (u1, v1) around pixel:
(1) ifIn | u-u1| > 0.5, i.e. u > u1+ 0.5 or u < u1- 0.5, then illustrate component u relative to relative to u1, closer to u1+ 1 or u1- 1, then it is continuous in the case of extreme point (u, v) closer to pixel (u1+ 1, v1) or (u1- 1, v1);Below in pixel (u1+ 1, v1) or (u1- 1, v1) place repeat step 1041-1042, and given highest number of repetition N. If within repeating the above steps 1041-1042N times, the corresponding offset of existing pixelMeet the absolute value of whole components Both less than 0.5, then retain this pixel, and according to this pixel and offsetExtreme point in the case of can positioning continuously; If the offset being calculated after repeating the above steps 1041-1042N timesStill there is the component that absolute value is more than 0.5, then directly Delete this pixel;
(2) forIn | v-v1| the situation of > 0.5, respective handling is done with above-mentioned (1).
In this embodiment, the down-sampling factor is set to 2 when the RGB-D scale space is built (it can be set to other values according to the actual situation), and the feature point can be precisely located according to how the components of the offset compare with 0.5.
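The sketch below illustrates steps 1141-1142 under common finite-difference approximations of the derivatives; the function name and the derivative estimates are ours, not the source's.

```python
import numpy as np

def subpixel_offset(F, u1, v1):
    """Offset of the stationary point of the Taylor expansion (11) at the
    detected extremum (u1, v1), per equations (11)-(12)."""
    Fu = (F[u1 + 1, v1] - F[u1 - 1, v1]) / 2.0             # first derivatives
    Fv = (F[u1, v1 + 1] - F[u1, v1 - 1]) / 2.0
    Fuu = F[u1 + 1, v1] - 2.0 * F[u1, v1] + F[u1 - 1, v1]  # second derivatives
    Fvv = F[u1, v1 + 1] - 2.0 * F[u1, v1] + F[u1, v1 - 1]
    Fuv = (F[u1 + 1, v1 + 1] - F[u1 + 1, v1 - 1]
           - F[u1 - 1, v1 + 1] + F[u1 - 1, v1 - 1]) / 4.0
    H = np.array([[Fuu, Fuv], [Fuv, Fvv]])                 # Hessian H_F
    du, dv = -np.linalg.solve(H, np.array([Fu, Fv]))       # equation (12)
    return du, dv

# Step 1142: if |du| < 0.5 and |dv| < 0.5, keep (u1 + du, v1 + dv);
# otherwise move to the neighboring pixel indicated by the offending
# component and repeat, up to N times.
```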
Step 1143 specifically includes: the extreme value at the precisely located feature point (u, v) is computed; given a threshold τ1, if F(u, v) is less than the threshold τ1, this feature point is considered a low-contrast feature point and is deleted; otherwise it is retained.
Step 1144 specifically includes:
Because edge points are difficult to locate and sensitive to noise, the edge effect must be removed. For the edge-response points, the 2×2 Hessian matrix H_F at the feature point (u, v) is used to screen the feature points:
The trace and determinant of the Hessian matrix H_F are computed. Given a threshold τ2, it is judged whether the ratio of the squared trace to the determinant is below the bound determined by τ2; if so, the feature point is kept, otherwise it is deleted.
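A sketch of both screening tests follows. The exact contrast formula and the bound on the trace-determinant ratio were lost in the source, so the SIFT-style choices below (contrast refined by half the gradient-offset product, bound (τ2+1)²/τ2) are assumptions, as are the threshold values.

```python
import numpy as np

def keep_feature(F, u, v, du, dv, tau1=0.03, tau2=10.0):
    """Steps 1143-1144: reject low-contrast and edge-response points."""
    # Step 1143 (assumed form): contrast at the refined location.
    Fu = (F[u + 1, v] - F[u - 1, v]) / 2.0
    Fv = (F[u, v + 1] - F[u, v - 1]) / 2.0
    contrast = F[u, v] + 0.5 * (Fu * du + Fv * dv)
    if abs(contrast) < tau1:
        return False
    # Step 1144: trace/determinant test on the 2x2 Hessian H_F.
    Fuu = F[u + 1, v] - 2.0 * F[u, v] + F[u - 1, v]
    Fvv = F[u, v + 1] - 2.0 * F[u, v] + F[u, v - 1]
    Fuv = (F[u + 1, v + 1] - F[u + 1, v - 1]
           - F[u - 1, v + 1] + F[u - 1, v - 1]) / 4.0
    tr, det = Fuu + Fvv, Fuu * Fvv - Fuv ** 2
    if det <= 0:
        return False                                   # curvatures differ in sign
    return tr ** 2 / det < (tau2 + 1.0) ** 2 / tau2    # assumed SIFT-style bound
```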
In step 102, to guarantee that the three-dimensional feature points are rotation invariant, the rotational invariance of the circle is considered: for each feature point, four concentric circular regions centered on that feature point, with radii 2, 4, 6 and 8, are selected as the region needed for describing the feature of that point. When the image rotates, the pixels in the neighborhood of the feature point move accordingly, but the relative positions of the pixels within the same ring do not change. Therefore, there is no need to assign a direction to the feature point to make it rotation invariant. It should be noted that the number of concentric circles in this embodiment is 4, but it can also be 2, 3 or more; if computational efficiency and the dimension of the subsequent feature vector did not matter, more concentric circles would essentially always be better.
As shown in Fig. 3, in step 103, within each annular region and the innermost central circle determined in step 102, the gradient magnitude and direction of the pixels in each ring are collected with histogram statistics. This specifically includes:
Divide 0°-360° into 12 direction intervals and, within each annular region, accumulate the gradients over the 12 direction intervals to build a direction histogram. The horizontal axis of the direction histogram is the gradient direction, and the vertical axis is the weighted accumulation of the gradient moduli for that direction, the weighting function being a Gaussian. The gradient modulus and direction are computed as in equations (13) and (14):
m(u, v) = sqrt( (F(u+1, v) - F(u-1, v))^2 + (F(u, v+1) - F(u, v-1))^2 )    (13)
θ(u, v) = arctan( (F(u, v+1) - F(u, v-1)) / (F(u+1, v) - F(u-1, v)) )    (14)
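The following sketch builds the four 12-bin histograms for one feature point. The Gaussian weighting scale and the treatment of boundary radii are not fixed by the source, so the choices below are assumptions; the point is assumed to lie at least radii[-1]+1 pixels from the image border.

```python
import numpy as np

def ring_histograms(F, cu, cv, radii=(2, 4, 6, 8), nbins=12):
    """Step 103: one direction histogram for the central circle and one
    per ring between consecutive circles, gradients per eqs. (13)-(14)."""
    hists = np.zeros((len(radii), nbins))
    rmax = radii[-1]
    sigma = rmax / 2.0                                  # assumed Gaussian scale
    for du in range(-rmax, rmax + 1):
        for dv in range(-rmax, rmax + 1):
            rho = np.hypot(du, dv)
            if rho > rmax:
                continue                                # outside outermost circle
            u, v = cu + du, cv + dv
            gu = F[u + 1, v] - F[u - 1, v]              # equation (13)
            gv = F[u, v + 1] - F[u, v - 1]
            m = np.hypot(gu, gv)
            theta = np.arctan2(gv, gu) % (2 * np.pi)    # equation (14)
            ring = int(np.searchsorted(radii, rho))     # 0 = central circle
            b = int(theta / (2 * np.pi / nbins)) % nbins
            hists[min(ring, len(radii) - 1), b] += np.exp(-rho**2 / (2 * sigma**2)) * m
    return hists
```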
In step 104, to address the time spent on direction assignment and the high dimension of the feature vector in feature description, the present invention considers the rotational invariance of the circle and proposes an annular feature descriptor, with the following steps:
Step 1041: by the above computation, each of the four concentric rings of a feature point corresponds to one direction histogram; from the weighted gradient-modulus accumulation values on the vertical axis of the histogram, each ring i yields a 12-dimensional vector, i = 1, 2, 3, 4.
Step 1042: going from the inside out, take the 12-dimensional vector of the innermost central circle as elements 1-12 of the initial feature vector, and the 12-dimensional vector of the first ring adjacent to the innermost circle as elements 13-24 of the feature vector, and so on, until the 48-dimensional initial feature vector Des of the RGB-D image three-dimensional feature point is obtained.
Step 1043: to guarantee rotation invariance, sort the initial feature vector Des: let Φ = (φ1, φ2, ..., φ12) and mark the largest component of Φ. If φ1 is the largest component of Φ, Des is left unprocessed. If φ1 is not the largest component of Φ, suppose φj = max{φ1, φ2, ..., φ11, φ12}; then circularly shift Φ left until φj occupies the first position, and apply the same circular left shift to the other rings, giving the sorted initial feature vector.
Step 1044: to strengthen matching robustness, a secondary vector is assigned to the feature point: if another component of Φ exceeds 80% of the largest component of Φ, the vector is circularly left-shifted so that this component occupies the first position; the feature point is thereby copied into multiple feature points whose positions and scales are identical but whose initial feature vectors differ.
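A sketch of steps 1042-1044 follows. The extraction lost the symbols identifying which ring supplies the reference vector Φ, so the innermost circle is assumed here; names are illustrative.

```python
import numpy as np

def annular_descriptor(hists):
    """Steps 1042-1044: concatenate the four 12-bin histograms into a
    48-dim vector, rotate the largest reference component to the front,
    and emit secondary descriptors for components above 80% of it."""
    phi = hists[0]                        # assumed reference: innermost circle
    jmax = int(np.argmax(phi))
    descriptors = []
    for j, val in enumerate(phi):
        if j == jmax or val > 0.8 * phi[jmax]:
            shifted = np.roll(hists, -j, axis=1)     # same left shift, all rings
            descriptors.append(shifted.reshape(-1))  # 4 x 12 -> 48 dimensions
    return descriptors
```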
Step 1045: to reduce the influence of illumination, the feature vector is normalized, generating the final feature description vector as shown in equation (15).
Step 1046: after the three-dimensional feature points are extracted and the feature description vectors generated, feature points are matched using the Euclidean distance between feature vectors as the similarity measure.
Let D be a feature point of the image to be matched, with feature vector D = (d1, d2, ..., d47, d48), and let the set of feature vectors corresponding to the feature points of the reference image be D_i, i = 1, 2, 3, ..., N, where N is the number of feature vectors in the reference image. The Euclidean distance between the feature vector D = (d1, d2, ..., d47, d48) of the feature point to be matched and each feature vector of the reference image is then
d(D, D_i) = sqrt( Σ_{k=1}^{48} (d_k - d_{i,k})^2 ).
A threshold λ is set; if the ratio of the nearest Euclidean distance to the second-nearest Euclidean distance is less than this threshold λ, the feature point D to be matched is considered to match the feature point of the reference image corresponding to the smallest Euclidean distance.
Fig. 4 shows example 1 of feature matching with the method provided by the present invention, where (a) is the feature matching result when the viewpoint moves along the X axis of the camera coordinate system of Fig. 2, (b) the result when the viewpoint moves along the Y axis of the camera coordinate system of Fig. 2, (c) the result when the viewpoint moves along the Z axis of the camera coordinate system of Fig. 2, and (d) the result when the viewpoint rotates about the camera optical axis.
Fig. 5 shows example 2 of feature matching with the method provided by the present invention, where (a) is the feature matching result when the viewpoint moves along the X axis of the camera coordinate system of Fig. 2, (b) the result when the viewpoint moves along the Y axis of the camera coordinate system of Fig. 2, (c) the result when the viewpoint moves along the Z axis of the camera coordinate system of Fig. 2, and (d) the result when the viewpoint rotates about the camera optical axis.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present invention, not to limit it. Those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not remove the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A three-dimensional scene feature representation and high-precision matching method based on RGB-D images, characterized by comprising:
step 101: using a perspective projection model and scale-space theory, detecting and extracting the three-dimensional feature points of the RGB-D image;
step 102: using the rotational invariance of the circle, selecting, for each feature point obtained in step 101, four concentric circular regions centered on that feature point as the feature description region needed for that feature point;
step 103: within each annular region and the innermost central circle determined in step 102, computing the gradient modulus and direction of the pixels and building a direction histogram, wherein an "annular region" is the ring between two adjacent concentric circles;
step 104: from the direction histograms obtained in step 103, building an annular descriptor, describing each feature point, generating its feature vector, and matching feature points by the Euclidean distance between feature vectors.
2. The RGB-D-based scene three-dimensional feature representation and high-precision matching method as claimed in claim 1, characterized in that step 102 specifically comprises:
to guarantee that the three-dimensional feature points are rotation invariant, using the rotational invariance of the circle, selecting for each feature point four concentric circular regions centered on that feature point, with radii 2, 4, 6 and 8, as the region needed for describing the feature of that point; when the image rotates, the pixels in the neighborhood of a feature point move accordingly, but the relative positions of the pixels within the same ring do not change.
3. The RGB-D-based scene three-dimensional feature representation and high-precision matching method as claimed in claim 1, characterized in that step 103 specifically comprises:
dividing 0°-360° into 12 direction intervals and, within each annular region, accumulating the gradients over the 12 direction intervals to build a direction histogram, the horizontal axis of the direction histogram being the gradient direction and the vertical axis being the weighted accumulation of the gradient moduli for that direction, the weighting function being a Gaussian, the gradient modulus and direction being computed as in equations (13) and (14):
<mrow> <mi>m</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>=</mo> <msqrt> <mrow> <msup> <mrow> <mo>(</mo> <mi>F</mi> <mo>(</mo> <mrow> <mi>u</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>v</mi> </mrow> <mo>)</mo> <mo>-</mo> <mi>F</mi> <mo>(</mo> <mrow> <mi>u</mi> <mo>-</mo> <mn>1</mn> <mo>,</mo> <mi>v</mi> </mrow> <mo>)</mo> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <mi>F</mi> <mo>(</mo> <mrow> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>+</mo> <mn>1</mn> </mrow> <mo>)</mo> <mo>-</mo> <mi>F</mi> <mo>(</mo> <mrow> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>-</mo> <mn>1</mn> </mrow> <mo>)</mo> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> </msqrt> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>13</mn> <mo>)</mo> </mrow> </mrow>
<mrow> <mi>&amp;theta;</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>arctan</mi> <mfrac> <mrow> <mi>F</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>-</mo> <mi>F</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>-</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> <mrow> <mi>F</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mi>F</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <mn>1</mn> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>14</mn> <mo>)</mo> </mrow> <mo>.</mo> </mrow>
4. The RGB-D-based scene three-dimensional feature representation and high-precision matching method as claimed in claim 1, characterized in that step 104 specifically comprises:
step 1041: by the above computation, each of the four concentric rings of a feature point corresponding to one direction histogram, obtaining from the weighted gradient-modulus accumulation values on the vertical axis of the histogram a 12-dimensional vector for each ring i, i = 1, 2, 3, 4;
step 1042: going from the inside out, taking the 12-dimensional vector of the innermost central circle as elements 1-12 of the initial feature vector, and the 12-dimensional vector of the first ring adjacent to the innermost circle as elements 13-24 of the feature vector, and so on, until the 48-dimensional initial feature vector Des of the RGB-D image three-dimensional feature point is obtained;
step 1043: to guarantee rotation invariance, sorting the initial feature vector Des: letting Φ = (φ1, φ2, ..., φ12) and marking the largest component of Φ; if φ1 is the largest component of Φ, leaving the initial feature vector Des unprocessed; if φ1 is not the largest component of Φ, supposing φj = max{φ1, φ2, ..., φ11, φ12}, circularly shifting Φ left until φj occupies the first position, and applying the same circular left shift to the other rings, giving the sorted initial feature vector;
step 1044: to strengthen matching robustness, assigning a secondary vector to the feature point;
step 1045: to reduce the influence of illumination, normalizing the initial feature vector and generating the final feature description vector;
step 1046: after extracting the three-dimensional feature points and generating the feature description vectors, matching feature points using the Euclidean distance between feature vectors as the similarity measure.
5. The RGB-D-based scene three-dimensional feature representation and high-precision matching method as claimed in claim 4, characterized in that step 1044 specifically comprises:
if another component of Φ exceeds 80% of the largest component of Φ, circularly left-shifting the vector so that this component occupies the first position, the feature point thereby being copied into multiple feature points whose positions and scales are identical but whose initial feature vectors differ.
6. The RGB-D-based scene three-dimensional feature representation and high-precision matching method as claimed in claim 4, characterized in that the normalization of the initial feature vector in step 1045 is as shown in equation (15).
7. The RGB-D-based scene three-dimensional feature representation and high-precision matching method as claimed in any one of claims 1 to 6, characterized in that step 101 specifically comprises:
step 111: using the perspective projection model, giving a dimension-reduction computation for RGB-D images that preserves the three-dimensional geometric structure, and obtaining the parametric representation of the scene in the camera coordinate system;
step 112: via the diffusion equation, using finite differences and scale-space theory, building the RGB-D scale space for detecting the three-dimensional feature points of the RGB-D image;
step 113: detecting extrema on the RGB-D scale space to obtain the positions of the feature points; and
step 114: precisely locating the feature points by sub-pixel interpolation, and screening out low-contrast and edge-response points, enhancing the stability and noise resistance of feature matching.
8. The three-dimensional scene feature representation and high-precision matching method based on RGB-D images as claimed in claim 7, characterized in that the parametric representation of a scene object point in the camera coordinate system in step 111 is:
<mrow> <mover> <mi>r</mi> <mo>&amp;RightArrow;</mo> </mover> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfenced open = "(" close = ")"> <mtable> <mtr> <mtd> <mrow> <mn>2</mn> <mi>u</mi> <mi> </mi> <mi>t</mi> <mi>a</mi> <mi>n</mi> <mfrac> <mi>&amp;omega;</mi> <mn>2</mn> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mn>2</mn> <mi>v</mi> <mfrac> <mi>H</mi> <mi>W</mi> </mfrac> <mi>t</mi> <mi>a</mi> <mi>n</mi> <mfrac> <mi>&amp;omega;</mi> <mn>2</mn> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mi>D</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>.</mo> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow>
in equation (1), r(u, v) gives the coordinates of the object point P in the camera coordinate system, the (u, v) plane is the imaging plane, ω is the horizontal field of view of the camera, W and H give the resolution of the image I0, and D(u, v) is the horizontal distance from the object point to the camera.
9. The RGB-D-based scene three-dimensional feature representation and high-precision matching method as claimed in claim 7, characterized in that step 112 specifically comprises:
according to scale-space theory, the Gaussian scale space L(x, y, σ) of the image I0(x, y) is expressed as the convolution of the Gaussian function G(x, y, σ) with the original image I0, as shown in equation (2):
L(x, y, σ) = G(x, y, σ) * I0(x, y),    (2)
where G(x, y, σ) is the two-dimensional Gaussian kernel; the Gaussian scale space of image I0 is then equivalent to the initial-value problem of the diffusion equation (3), i.e.:
<mrow> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <mfrac> <mrow> <mo>&amp;part;</mo> <mi>f</mi> </mrow> <mrow> <mo>&amp;part;</mo> <mi>&amp;sigma;</mi> </mrow> </mfrac> <mo>=</mo> <mfrac> <mrow> <msup> <mo>&amp;part;</mo> <mn>2</mn> </msup> <mi>f</mi> </mrow> <mrow> <msup> <mo>&amp;part;</mo> <mn>2</mn> </msup> <mi>x</mi> </mrow> </mfrac> <mo>+</mo> <mfrac> <mrow> <msup> <mo>&amp;part;</mo> <mn>2</mn> </msup> <mi>f</mi> </mrow> <mrow> <msup> <mo>&amp;part;</mo> <mn>2</mn> </msup> <mi>y</mi> </mrow> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mi>f</mi> <msub> <mo>|</mo> <mrow> <mi>&amp;sigma;</mi> <mo>=</mo> <mn>0</mn> </mrow> </msub> <mo>=</mo> <msub> <mi>I</mi> <mn>0</mn> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow>
the diffusion equation (3) has the unique solution f = G(x, y, σ) * I0(x, y), where * denotes convolution;
using finite-difference theory, obtaining the difference approximation of the diffusion-equation initial-value problem comprises:
discretizing the support domain Ω of image I0 with step h into Ω_d and introducing the following difference components to obtain the difference form of the diffusion equation (3) and thereby build the RGB-D scale space, the difference components being expressed as follows:
<mrow> <msub> <mo>&amp;part;</mo> <mi>u</mi> </msub> <mi>f</mi> <mo>=</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>+</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <mo>|</mo> <mo>|</mo> <mover> <mi>r</mi> <mo>&amp;RightArrow;</mo> </mover> <mrow> <mo>(</mo> <mi>u</mi> <mo>+</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mover> <mi>r</mi> <mo>&amp;RightArrow;</mo> </mover> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>|</mo> <mo>|</mo> </mrow> </mfrac> <mo>=</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>+</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <msubsup> <mi>r</mi> <mi>u</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>4</mn> <mo>)</mo> </mrow> </mrow>
<mrow> <msub> <mo>&amp;part;</mo> <msup> <mi>u</mi> <mo>+</mo> </msup> </msub> <mi>f</mi> <mo>=</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>+</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <mo>|</mo> <mo>|</mo> <mover> <mi>r</mi> <mo>&amp;RightArrow;</mo> </mover> <mrow> <mo>(</mo> <mi>u</mi> <mo>+</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mover> <mi>r</mi> <mo>&amp;RightArrow;</mo> </mover> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>|</mo> <mo>|</mo> </mrow> </mfrac> <mo>=</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>+</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <msubsup> <mi>r</mi> <mi>u</mi> <mo>+</mo> </msubsup> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> </mrow>
<mrow> <msub> <mo>&amp;part;</mo> <msup> <mi>u</mi> <mo>-</mo> </msup> </msub> <mi>f</mi> <mo>=</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <mo>|</mo> <mo>|</mo> <mover> <mi>r</mi> <mo>&amp;RightArrow;</mo> </mover> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mover> <mi>r</mi> <mo>&amp;RightArrow;</mo> </mover> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>|</mo> <mo>|</mo> </mrow> </mfrac> <mo>=</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> <mo>-</mo> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <msubsup> <mi>r</mi> <mi>u</mi> <mo>-</mo> </msubsup> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>6</mn> <mo>)</mo> </mrow> </mrow>
<mrow> <msub> <mo>&amp;part;</mo> <mrow> <mi>u</mi> <mi>u</mi> </mrow> </msub> <mi>f</mi> <mo>=</mo> <mfrac> <mrow> <msub> <mo>&amp;part;</mo> <msup> <mi>u</mi> <mo>+</mo> </msup> </msub> <mi>f</mi> <mo>-</mo> <msub> <mo>&amp;part;</mo> <msup> <mi>u</mi> <mo>-</mo> </msup> </msub> <mi>f</mi> </mrow> <msubsup> <mi>r</mi> <mi>u</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mfrac> <mo>=</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>+</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>r</mi> <mi>u</mi> <mo>+</mo> </msubsup> <msubsup> <mi>r</mi> <mi>u</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>r</mi> <mi>u</mi> <mo>+</mo> </msubsup> <msubsup> <mi>r</mi> <mi>u</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>r</mi> <mi>u</mi> <mo>-</mo> </msubsup> <msubsup> <mi>r</mi> <mi>u</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>-</mo> <mi>h</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>r</mi> <mi>u</mi> <mo>-</mo> </msubsup> <msubsup> <mi>r</mi> <mi>u</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mrow> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>7</mn> <mo>)</mo> </mrow> </mrow>
where r_u^{+-}, r_u^{+} and r_u^{-} are simplified notation for the denominators ||r(u+h, v) - r(u-h, v)||, ||r(u+h, v) - r(u, v)|| and ||r(u-h, v) - r(u, v)|| respectively;
similarly, ∂_v f, ∂_{v+} f and ∂_{v-} f are defined analogously, i.e.:
<mrow> <msub> <mo>&amp;part;</mo> <mrow> <mi>v</mi> <mi>v</mi> </mrow> </msub> <mi>f</mi> <mo>=</mo> <mfrac> <mrow> <msub> <mo>&amp;part;</mo> <msup> <mi>v</mi> <mo>+</mo> </msup> </msub> <mi>f</mi> <mo>-</mo> <msub> <mo>&amp;part;</mo> <msup> <mi>v</mi> <mo>-</mo> </msup> </msub> <mi>f</mi> </mrow> <msubsup> <mi>r</mi> <mi>v</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mfrac> <mo>=</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>+</mo> <mi>h</mi> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>r</mi> <mi>v</mi> <mo>+</mo> </msubsup> <msubsup> <mi>r</mi> <mi>v</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>r</mi> <mi>v</mi> <mo>+</mo> </msubsup> <msubsup> <mi>r</mi> <mi>v</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>r</mi> <mi>v</mi> <mo>-</mo> </msubsup> <msubsup> <mi>r</mi> <mi>v</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>u</mi> <mo>,</mo> <mi>v</mi> <mo>-</mo> <mi>h</mi> <mo>)</mo> </mrow> </mrow> <mrow> <msubsup> <mi>r</mi> <mi>v</mi> <mo>-</mo> </msubsup> <msubsup> <mi>r</mi> <mi>v</mi> <mrow> <mo>+</mo> <mo>-</mo> </mrow> </msubsup> </mrow> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>8</mn> <mo>)</mo> </mrow> </mrow>
therefore, the discrete second-order differential operator of the Laplace operator L is introduced, giving the difference equation (9);
writing equation (9) in matrix form and letting A_n denote the resulting matrix, equation (9) is approximated, by the definition of the derivative, by equation (10):
<mrow> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <mfrac> <mrow> <msup> <mi>f</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </msup> <mo>-</mo> <msup> <mi>f</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msup> </mrow> <mi>&amp;tau;</mi> </mfrac> <mo>=</mo> <msub> <mi>A</mi> <mi>n</mi> </msub> <msup> <mi>f</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msup> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msup> <mi>f</mi> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> </msup> <mo>=</mo> <msub> <mi>I</mi> <mn>0</mn> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>10</mn> <mo>)</mo> </mrow> </mrow>
in equation (10), τ is the scale difference between image layers, i.e. τ = σ^(n+1) - σ^(n); the RGB-D scale space is built by iteratively solving equation (10).
10. The RGB-D-based scene three-dimensional feature representation and high-precision matching method as claimed in claim 7, characterized in that step 114 specifically comprises:
to obtain the extremum points of the continuous case, precisely locating the feature points by sub-pixel interpolation, as follows:
step 1141: letting F(u, v) = Af(u, v) and supposing the extremum point obtained by the above extremum detection is (u1, v1), expanding F(u, v) in a Taylor series at this extremum point (u1, v1) and finding the stationary point to obtain the offset;
step 1142: locating the feature point according to how the components of the offset compare with 0.5;
to enhance the stability and noise resistance of feature matching, screening out the low-contrast and edge-response points, as follows:
step 1143: deleting the low-contrast points among the located feature points;
step 1144: deleting the edge-response points among the located feature points;
step 1145: after the screening of steps 1143 and 1144, the remaining feature points being the stable three-dimensional feature points of the RGB-D image.
CN201711293626.6A 2017-12-08 2017-12-08 Three-dimensional scene feature expression and high-precision matching method based on RGB-D image Active CN107945221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711293626.6A CN107945221B (en) 2017-12-08 2017-12-08 Three-dimensional scene feature expression and high-precision matching method based on RGB-D image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711293626.6A CN107945221B (en) 2017-12-08 2017-12-08 Three-dimensional scene feature expression and high-precision matching method based on RGB-D image

Publications (2)

Publication Number Publication Date
CN107945221A true CN107945221A (en) 2018-04-20
CN107945221B CN107945221B (en) 2021-06-11

Family

ID=61945295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711293626.6A Active CN107945221B (en) 2017-12-08 2017-12-08 Three-dimensional scene feature expression and high-precision matching method based on RGB-D image

Country Status (1)

Country Link
CN (1) CN107945221B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156870A (en) * 2011-04-12 2011-08-17 张小军 Device and extraction method for extracting invariant characteristics of local rotation of image
CN102968777A (en) * 2012-11-20 2013-03-13 河海大学 Image stitching method based on overlapping region scale-invariant feature transform (SIFT) feature points
US20150205792A1 (en) * 2014-01-22 2015-07-23 Stmicroelectronics S.R.L. Method for object recognition, corresponding system, apparatus and computer program product

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZDDHUB: "SIFT算法详解 [Detailed Explanation of the SIFT Algorithm]", HTTPS://BLOG.CSDN.NET/ZDDBLOG/ARTICLE/DETAILS/7521424 *
Li Xinde et al.: "A General Object Recognition Algorithm Based on 2D and 3D SIFT Feature-Level Fusion", Acta Electronica Sinica (电子学报) *
Ke Xiang et al.: "A Real-Time Object Recognition System for Indoor Service Robots", Computer Systems & Applications (计算机系统应用) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145708A (en) * 2018-06-22 2019-01-04 南京大学 A kind of people flow rate statistical method based on the fusion of RGB and D information
CN109145708B (en) * 2018-06-22 2020-07-24 南京大学 Pedestrian flow statistical method based on RGB and D information fusion
CN110349225B (en) * 2019-07-12 2023-02-28 四川易利数字城市科技有限公司 BIM model external contour rapid extraction method
CN110349225A (en) * 2019-07-12 2019-10-18 四川易利数字城市科技有限公司 A kind of BIM model exterior contour rapid extracting method
CN110580497A (en) * 2019-07-16 2019-12-17 中国地质大学(武汉) Spatial scene matching method based on rotation invariance
CN110580497B (en) * 2019-07-16 2023-03-24 中国地质大学(武汉) Spatial scene matching method based on rotation invariance
CN110717497A (en) * 2019-09-06 2020-01-21 中国平安财产保险股份有限公司 Image similarity matching method and device and computer readable storage medium
CN110717497B (en) * 2019-09-06 2023-11-07 中国平安财产保险股份有限公司 Image similarity matching method, device and computer readable storage medium
CN110908512A (en) * 2019-11-14 2020-03-24 光沦科技(杭州)有限公司 Man-machine interaction method based on dynamic gesture coordinate mapping
CN111652085A (en) * 2020-05-14 2020-09-11 东莞理工学院 Object identification method based on combination of 2D and 3D features
CN113689403A (en) * 2021-08-24 2021-11-23 中国科学院长春光学精密机械与物理研究所 Feature description system based on inter-feature azimuth distance
CN113689403B (en) * 2021-08-24 2023-09-19 中国科学院长春光学精密机械与物理研究所 Feature description system based on inter-feature azimuth distance
CN117132913A (en) * 2023-10-26 2023-11-28 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching
CN117132913B (en) * 2023-10-26 2024-01-26 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching

Also Published As

Publication number Publication date
CN107945221B (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN107945221A (en) A kind of three-dimensional scenic feature representation based on RGB D images and high-precision matching process
CN104376548B (en) A kind of quick joining method of image based on modified SURF algorithm
CN105957007B (en) Image split-joint method based on characteristic point plane similarity
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN110288657B (en) Augmented reality three-dimensional registration method based on Kinect
CN104574347B (en) Satellite in orbit image geometry positioning accuracy evaluation method based on multi- source Remote Sensing Data data
CN108053367A (en) A kind of 3D point cloud splicing and fusion method based on RGB-D characteristic matchings
US7522163B2 (en) Method and apparatus for determining offsets of a part from a digital image
CN106940876A (en) A kind of quick unmanned plane merging algorithm for images based on SURF
CN106651942A (en) Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points
CN109859226B (en) Detection method of checkerboard corner sub-pixels for graph segmentation
CN104751465A (en) ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN108734657B (en) Image splicing method with parallax processing capability
CN105303615A (en) Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image
CN104881855B (en) A kind of multi-focus image fusing method of utilization morphology and free boundary condition movable contour model
CN108876723A (en) A kind of construction method of the color background of gray scale target image
CN108961286B (en) Unmanned aerial vehicle image segmentation method considering three-dimensional and edge shape characteristics of building
CN104616247B (en) A kind of method for map splicing of being taken photo by plane based on super-pixel SIFT
CN103034982A (en) Image super-resolution rebuilding method based on variable focal length video sequence
CN111340701A (en) Circuit board image splicing method for screening matching points based on clustering method
CN107154017A (en) A kind of image split-joint method based on SIFT feature Point matching
CN105427333A (en) Real-time registration method of video sequence image, system and shooting terminal
CN109671039B (en) Image vectorization method based on layering characteristics
CN107886101A (en) A kind of scene three-dimensional feature point highly effective extraction method based on RGB D

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Liu Chang

Inventor after: Qiu Jun

Inventor after: Wu Lina

Inventor after: Gao Pan

Inventor before: Qiu Jun

Inventor before: Liu Chang

Inventor before: Wu Lina

Inventor before: Gao Pan

GR01 Patent grant