CN107481315A - Monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm - Google Patents

Monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm

Info

Publication number
CN107481315A
CN107481315A (application CN201710516013.8A)
Authority
CN
China
Prior art keywords: image, dimensional, algorithms, harris, sift
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710516013.8A
Other languages
Chinese (zh)
Inventor
冯明驰 (Feng Mingchi)
黄帅 (Huang Shuai)
郑太雄 (Zheng Taixiong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201710516013.8A
Publication of CN107481315A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Abstract

The invention claims a monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm, comprising the following steps. First, the camera is calibrated and its intrinsic and extrinsic parameters are obtained. Video data are then acquired and processed by a program into an image sequence. Feature points are extracted from the images in a manner that combines the Harris, SIFT and BRIEF algorithms, and the extracted feature points are matched with an optical-flow method. Next, for the acquired sets of matched point pairs, combined with the calibrated intrinsic and extrinsic parameters, three-dimensional coordinates are computed by the principle of triangulation; the scale of the reconstruction is obtained from the camera's height above the ground or from other sensors, and the three-dimensional information of the environment is thereby reconstructed, optimised by a combination of global and local bundle adjustment. Finally, the reconstructed three-dimensional point-cloud obstacle information is passed to a decision system, which determines the steering-wheel angle and the throttle setting. The invention extracts the information in an image effectively and makes the matching result more accurate.

Description

Monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm
Technical field
The invention belongs to the field of three-dimensional reconstruction in computer vision, and in particular to a monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm. The method is mainly applied to the three-dimensional reconstruction of the environment in front of an intelligent vehicle and has wide application value.
Background art
In recent years, research on and applications of three-dimensional reconstruction have made breakthrough progress, but still face enormous challenges. Many fields still rely on monocular vision, whose purpose is to simulate the principle of the human visual system: a single camera imitates the human eye by photographing the target object from different angles, or even at different times and places, and after a series of processing steps the three-dimensional structure of the object is reconstructed according to the triangulation principle of three-dimensional reconstruction. The planar information obtained by ordinary monocular vision, however, cannot meet the demands of diversification, increasing complexity and precision, and is unfavourable for an intelligent vehicle's recognition of its surroundings. The algorithms used in three-dimensional environment reconstruction therefore need to be improved.
From current research, one of the most important and most complex links in monocular-vision three-dimensional environment reconstruction is image matching. Current image-matching algorithms include the Harris, SIFT and BRIEF algorithms, among others. Applied to monocular-vision three-dimensional environment reconstruction, the Harris-based method is computationally simple and convenient to operate, but it is not scale-invariant and its extracted corners are only pixel-accurate, which is unfavourable for online reconstruction; the validity of the three-dimensional reconstruction therefore cannot be guaranteed. The SIFT-based method solves the scale-invariance problem effectively, but it is prone to mismatches, cannot accurately extract feature points on targets with smooth edges, and its real-time performance is poor. The BRIEF-based method greatly accelerates descriptor generation, which helps speed up the reconstruction, but its rotational invariance is poor and it is easily affected by noise. An image-matching method based on the combined Harris-SIFT-BRIEF algorithm, by contrast, unites the advantages of all three algorithms: it improves the precision of the three-dimensional reconstruction, strengthens robustness, offers higher real-time performance, and can quickly reconstruct the current environment of an intelligent vehicle.
Content of the invention
The present invention seeks to address the above problems of the prior art. It proposes a monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm that extracts the information in an image effectively and makes the matching result more accurate. The technical scheme of the present invention is as follows:
A monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm, comprising the following steps:
1) Combining the experimental scene, calibrate the camera, establish the relation between camera image pixel positions and three-dimensional scene positions, and obtain the intrinsic and extrinsic parameters of the camera;
2) Then acquire video data with the monocular vision system, apply a program to perform (noise-reduction) processing on the video, convert the processed video into a single-frame image sequence, and pre-process the acquired image sequence;
3) Taking the previous frame as the reference image and the current frame as the detection image, extract feature points with a proposed new algorithm combining the Harris algorithm (a point-feature extraction operator named after its author), the SIFT (scale-invariant feature transform) algorithm and the BRIEF (binary robust independent elementary features) algorithm; select the regions of the previous and current frames where feature points are comparatively abundant, and perform feature matching with an optical-flow method, forming multiple sets of matched point pairs;
4) For the acquired sets of matched point pairs, combined with the calibrated intrinsic and extrinsic parameters, compute three-dimensional coordinates by the principle of triangulation; the scale of the reconstruction can be obtained from the camera's height above the ground or from other sensors, and the three-dimensional information of the environment is thereby reconstructed. Interpolating and gridding the acquired three-dimensional information yields the three-dimensional information of each frame relative to the previous frame; taking the three-dimensional information of consecutive frames as input, an initial three-dimensional model is established, and by analogy the three-dimensional environment information of the intelligent vehicle at every moment can be reconstructed;
5) Finally, pass the moving-target information in the detected three-dimensional environment to the decision system, which decides the steering-wheel angle and the throttle setting.
Further, the establishment of the monocular vision system of step 2) specifically includes: an ordinary CCD camera is fixed on the roof of an intelligent vehicle and tilted downwards at a certain depression angle; the height of the camera above the ground is measured as h, the depression angle of the camera as β, and the distance at which the camera's line of sight just clears the bonnet and meets the ground as d. The monocular vision system is thus built, and the size of the picture shot by the camera is taken as u × v.
Further, step 1) calibrates the camera and establishes the relation between camera image pixel positions and three-dimensional scene positions; obtaining the intrinsic and extrinsic parameters of the camera specifically includes:
(1) Choose a large calibration cloth and measure the side length l1 of any one small square on the cloth;
(2) Place the calibration cloth in different orientations, ensuring that the camera can photograph it completely, and obtain N images, N ≥ 10; load all images into the Matlab calibration toolbox, input the size l1 and start calibrating the camera; finally obtain the intrinsic parameter matrix K and the extrinsic parameter matrix [R t] of the camera, where K contains the principal-point image-plane coordinates (cx, cy) and the focal lengths fx and fy of the camera in the x-axis and y-axis directions.
Further, step 3) takes the previous frame as the reference image and the current frame as the detection image, and extracts the corners in the image with the Harris algorithm. First, let I(x, y) be a pixel of the image and obtain its gradients Ix and Iy in the x and y directions; form the gradient products Ix² = Ix·Ix, Iy² = Iy·Iy and Ixy = Ix·Iy, and apply Gaussian weighting to Ix², Iy² and Ixy. Then compute the Harris response R of every pixel and set to zero every response below the threshold t: R = {R : det M − α(trace M)² < t}. Finally, perform non-maximum suppression in a 3 × 3 neighbourhood, so that the corners extracted in the image are represented by the points of local maxima.
Further, for each detected feature point, step 3) determines the principal direction and position of the feature point with the SIFT algorithm, using the gradient-direction distribution of the pixels in the feature point's neighbourhood to assign a direction parameter to each feature point;
Formula (1) gives the gradient magnitude and direction at (x, y), where L is the image at the scale at which each feature point was found:
m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²), θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))    (1)
In the actual calculation, samples are taken in a neighbourhood window centred on the feature point, and the gradient directions of the neighbouring pixels are accumulated into a gradient-orientation histogram. The histogram covers 0° to 360° with one bin per 10°, 36 bins in all; the peak of the histogram represents the principal direction of the neighbourhood gradients at the feature point, i.e. it serves as the direction of the feature point.
After the principal direction of a feature point has been determined, the extracted corner is represented by its local maximum point; a three-dimensional quadratic function is then fitted to determine the position and scale of the feature point accurately. The Taylor expansion of the scale-space function D(x, y, σ) about the local extremum (x0, y0, σ) is shown in formula (2):
D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X    (2)
where X = (x, y, σ)ᵀ. Differentiating formula (2) and setting the derivative to zero yields the accurate extremum location Xmax, as shown in formula (3):
Xmax = −(∂²D/∂X²)⁻¹ (∂D/∂X)    (3)
Further, after step 3) has determined the position of a feature point, a feature descriptor is built in the feature point's neighbourhood with the BRIEF algorithm, specifically: first, Gaussian filtering is applied to the image; then, centred on the feature point, a large window of S × S pixels is taken, a pair of 3 × 3 sub-windows is selected at random inside it, and the pixel sums of the two sub-windows are compared and assigned a binary value. N pairs of sub-windows are selected at random inside the large window and the comparison of sub-window pixel sums is repeated, forming a binary code; this code is the description of the feature point, i.e. the feature descriptor.
Further, step 3) matches the feature descriptors extracted from the previous frame and the current frame with an optical-flow matching algorithm. Based on the image of the previous frame, the algorithm finds, for each pixel of the current frame, the corresponding pixel of the previous frame, and finally obtains a disparity map of the image pixels. The regions of the previous and current frames where feature points are comparatively abundant are selected; on this basis the stereo matching of the two frames is obtained, and by analogy the stereo matching of every frame relative to its previous frame, finally forming multiple sets of matched point pairs.
Further, the three-dimensional environment reconstruction of step 4) mainly uses the principle of triangulation, and the accumulated error of the reconstruction is optimised by a combination of global and local bundle adjustment.
Advantages and beneficial effects of the present invention:
In three-dimensional reconstruction there are many image-matching algorithms, but the shortcomings of each are obvious. The present invention proposes a Harris-SIFT-BRIEF algorithm that can extract and match feature points in an image accurately. The algorithm is computationally simple and convenient to operate; it is invariant to rotation, scaling and brightness changes, and maintains a certain degree of stability under viewpoint changes, affine transformations and noise. In addition, the method extracts feature points accurately and generates feature-point descriptors quickly; it is not prone to mismatches, offers higher real-time performance and has strong flexibility. Applied to three-dimensional environment reconstruction, it helps accelerate the reconstruction, greatly reduces the registration time, improves the precision of the reconstruction, strengthens robustness, and adapts monocular-vision three-dimensional environment reconstruction to changeable environmental conditions.
The monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm proposed by the present invention can be applied to the environment-perception system of an intelligent vehicle: it improves the recognition of the vehicle's surroundings, reduces computational complexity, and prepares for the navigation of the intelligent vehicle, helping the vehicle react to its surroundings and make reasonable judgements as a person would.
Brief description of the drawings
Fig. 1 is the flow chart of the monocular-vision three-dimensional environment reconstruction provided by the preferred embodiment of the present invention;
Fig. 2 shows the position at which the camera provided by the invention is mounted on the intelligent vehicle;
Fig. 3 shows the coverage of the camera provided by the invention on the intelligent vehicle;
Fig. 4 shows the calibration cloth provided by the invention;
Fig. 5 shows the road-condition information shot by the camera provided by the invention;
Fig. 6 shows the new image feature-point extraction algorithm provided by the invention.
Embodiment
The technical scheme in the embodiments of the present invention is described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present invention.
The technical scheme by which the present invention solves the above technical problem is as follows:
Fig. 1 shows the flow chart of the three-dimensional environment reconstruction method based on monocular vision provided by the invention, which mainly includes the following steps. Combining the experimental scene, the camera is calibrated; the relation between camera image pixel positions and three-dimensional scene positions is established, and the intrinsic and extrinsic parameters of the camera are obtained. Video data are then collected by monocular vision; the video is handled with video-processing software and converted into a single-frame image sequence, and the acquired image sequence is pre-processed. Afterwards, taking the previous frame as the reference image and the current frame as the detection image, feature points are extracted with the proposed new algorithm combining the Harris algorithm (a point-feature extraction operator named after its author), the SIFT (scale-invariant feature transform) algorithm and the BRIEF (binary robust independent elementary features) algorithm; the regions of the previous and current frames where feature points are comparatively abundant are selected, feature matching is performed with an optical-flow method, and multiple sets of matched point pairs are formed. For the acquired matched point pairs, combined with the calibrated intrinsic and extrinsic parameters, three-dimensional coordinates are computed by the principle of triangulation; the scale of the reconstruction is obtained from the camera's height above the ground or from other sensors, and the three-dimensional information of the environment is thereby reconstructed. Interpolating and gridding the acquired three-dimensional information yields the three-dimensional information of each frame relative to the previous frame; taking the three-dimensional information of consecutive frames as input, an initial three-dimensional model is established, and by analogy the three-dimensional environment information of the intelligent vehicle at every moment can be reconstructed. Finally, the moving-target information in the detected three-dimensional environment is passed to the decision system, which determines the steering-wheel angle and the throttle setting.
The present invention is further illustrated below in a concrete manner, as follows:
Step 1: Camera installation
An ordinary CCD camera is fixed on an intelligent vehicle and tilted downwards at a certain depression angle. The height of the camera above the ground is measured as h and the depression angle of the camera as β. The distance at which the camera's line of sight just clears the bonnet and meets the ground is measured as d. The monocular vision system is thus built, and the size of the picture shot by the camera is taken as u × v, as shown in Fig. 2. Fig. 3 shows, in top view, the position of the camera on the intelligent vehicle and the camera's scanning range of the environment.
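The measured quantities h and β tie the mounting geometry together: for an idealised pinhole camera at height h with its optical axis depressed by β, the axis meets the ground at horizontal distance h/tan β, and the vertical field of view bounds the strip of ground the camera covers. A minimal sketch with hypothetical values (the patent measures d directly rather than computing it, so this is illustrative geometry only):

```python
import math

# Hypothetical mounting values (not from the patent): camera 1.4 m above
# the ground, optical axis depressed 15 degrees below horizontal.
h = 1.4
beta = math.radians(15.0)

# Horizontal distance at which the optical axis meets the ground.
d_axis = h / math.tan(beta)

# A hypothetical 20 degree vertical field of view bounds the imaged ground strip:
fov_v = math.radians(20.0)
d_near = h / math.tan(beta + fov_v / 2)   # closest ground point imaged
d_far = h / math.tan(beta - fov_v / 2)    # farthest ground point imaged
```

With these values the optical axis meets the ground about 5.2 m ahead of the camera, and the imaged ground strip runs from roughly 3 m to 16 m.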
Step 2: Camera calibration
Combining the experimental scene, the image data transmitted by the camera in real time are read by the terminal in RGB format; the camera is calibrated, the relation between camera image pixel positions and three-dimensional scene positions is established, and the intrinsic parameter matrix K and extrinsic parameter matrix [R t] of the camera are obtained. The intrinsic and extrinsic parameters are shown in Table 1.
Table 1 Intrinsic and extrinsic parameters of the camera
2.1 A large 10 × 8 calibration cloth is chosen here; the side length of any one small square on the cloth is measured as l1, as shown in Fig. 4.
2.2 The calibration cloth is placed in different orientations, ensuring that the camera can photograph it completely, and N (N ≥ 10) images are obtained. All images are loaded into the Matlab calibration toolbox and the size l1 is input to start calibrating the camera. Finally the intrinsic parameter matrix K and extrinsic parameter matrix [R t] of the camera are obtained, where K contains the principal-point image-plane coordinates (cx, cy) and the focal lengths fx and fy of the camera in the x-axis and y-axis directions.
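The calibrated quantities can be exercised numerically: K is assembled from fx, fy, cx, cy, and a 3D point is projected to pixels through the standard pinhole model x = K(RX + t). The intrinsic values below are hypothetical stand-ins (the real ones come out of the Matlab toolbox), and [R t] is taken as a trivial pose:

```python
import numpy as np

# Hypothetical intrinsics for illustration; the Matlab calibration toolbox
# would supply the real fx, fy, cx, cy for the camera in use.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Trivial extrinsics [R | t]: world frame coincides with the camera frame.
R = np.eye(3)
t = np.zeros(3)

def project(K, R, t, X):
    """Pinhole projection of a 3D world point X to pixel coordinates (u, v)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# A point 4 m ahead of the camera and 0.5 m to its right:
uv = project(K, R, t, np.array([0.5, 0.0, 4.0]))
```

The same `project` relation, run in reverse through triangulation, is what later recovers 3D coordinates from matched pixels.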
Step 3: Image acquisition and processing
The road-condition information is shot with the camera on top of the intelligent vehicle; the acquired video is processed by the application program and converted into a single-frame image sequence, denoted I(xi, yj), as shown in Fig. 5, and the acquired image sequence is pre-processed.
Step 4: Image feature extraction and matching
From the camera intrinsic and extrinsic parameters obtained in Step 2 and the pre-processed images obtained in Step 3, feature extraction is performed on the images in the proposed manner combining the Harris algorithm (a point-feature extraction operator named after its author), the SIFT (scale-invariant feature transform) algorithm and the BRIEF (binary robust independent elementary features) algorithm, as shown in Fig. 6.
4.1 For the acquired images, the corners in each image are extracted with the Harris algorithm. First, let I(x, y) be a pixel of the image and obtain its gradients Ix and Iy in the x and y directions; form the gradient products Ix² = Ix·Ix, Iy² = Iy·Iy and Ixy = Ix·Iy, and apply Gaussian weighting to Ix², Iy² and Ixy. Then compute the Harris response R of every pixel and set to zero every response below the threshold: R = {R : det M − α(trace M)² < t}. Finally, perform non-maximum suppression in a 3 × 3 neighbourhood; the corners extracted in the image are represented by the points of local maxima.
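The response computation of step 4.1 can be sketched in NumPy. This is a minimal sketch, not the patent's exact operator: central-difference gradients and a 3 × 3 box window stand in for the Gaussian weighting specified above, and the thresholding and non-maximum suppression steps are left out.

```python
import numpy as np

def harris_response(img, alpha=0.04):
    """Harris response R = det(M) - alpha * (trace M)^2 for every pixel.
    Sketch only: a box window replaces the Gaussian weighting of the text."""
    img = img.astype(float)
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central differences
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def window_sum(a):  # 3x3 box filter (wraps at the border via roll)
        out = np.zeros_like(a)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                out += np.roll(np.roll(a, dr, axis=0), dc, axis=1)
        return out / 9.0

    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - alpha * trace ** 2

# A bright square on a dark background: corners respond positively,
# edge midpoints negatively, flat regions not at all.
img = np.zeros((20, 20))
img[5:15, 5:15] = 255.0
resp = harris_response(img)
```

The sign pattern is the usual Harris behaviour: det M vanishes along an edge (one dominant gradient direction), so R goes negative there, while at a corner both eigenvalues of M are large and R is positive.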
4.2 For each detected feature point, the principal direction and position of the feature point are determined with the SIFT algorithm. The gradient-direction distribution of the pixels in the feature point's neighbourhood is used to assign a direction parameter to each feature point, giving the operator rotational invariance.
Formula (1) gives the gradient magnitude and direction at (x, y), where L is the image at the scale at which each feature point was found:
m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²), θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))    (1)
In the actual calculation, samples are taken in a neighbourhood window centred on the feature point, and the gradient directions of the neighbouring pixels are accumulated into a gradient-orientation histogram. The histogram covers 0° to 360° with one bin per 10°, 36 bins in all; the peak of the histogram represents the principal direction of the neighbourhood gradients at the feature point, i.e. it serves as the direction of the feature point.
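The 36-bin orientation assignment of step 4.2 can be sketched directly; this minimal version omits the magnitude Gaussian weighting and the peak interpolation that a full SIFT implementation would add:

```python
import numpy as np

def principal_orientation(L, kp, radius=4):
    """Principal orientation at keypoint kp = (row, col): accumulate a
    36-bin (10 degree) histogram of gradient directions in the
    neighbourhood, weighted by gradient magnitude, and return the centre
    angle of the peak bin. Sketch only (no Gaussian weighting, no peak
    interpolation)."""
    L = L.astype(float)
    r0, c0 = kp
    hist = np.zeros(36)
    for r in range(r0 - radius, r0 + radius + 1):
        for c in range(c0 - radius, c0 + radius + 1):
            dx = (L[r, c + 1] - L[r, c - 1]) / 2.0   # formula (1) differences
            dy = (L[r + 1, c] - L[r - 1, c]) / 2.0
            m = np.hypot(dx, dy)
            theta = np.degrees(np.arctan2(dy, dx)) % 360.0
            hist[int(theta // 10) % 36] += m
    return 10.0 * np.argmax(hist) + 5.0   # bin centre in degrees

# Ramps with a single gradient direction make the expected peak obvious:
ramp_x = np.tile(np.arange(21, dtype=float), (21, 1))  # brightens rightwards
theta_x = principal_orientation(ramp_x, (10, 10))      # gradient along +x
theta_y = principal_orientation(ramp_x.T, (10, 10))    # gradient along +y
```

On the horizontal ramp every sample falls into the 0°-10° bin, so the returned principal direction is the 5° bin centre; the transposed ramp lands in the 90°-100° bin.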
After the principal direction of a feature point has been determined, the corners extracted in 4.1 are represented by their local maximum points; a three-dimensional quadratic function is then fitted to determine the position and scale of the feature point accurately. The Taylor expansion of the scale-space function D(x, y, σ) about the local extremum (x0, y0, σ) is shown in formula (2):
D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X    (2)
where X = (x, y, σ)ᵀ. Differentiating formula (2) and setting the derivative to zero yields the accurate extremum location Xmax, as shown in formula (3):
Xmax = −(∂²D/∂X²)⁻¹ (∂D/∂X)    (3)
4.3 After the position of a feature point has been determined, a feature descriptor is built in the feature point's neighbourhood with the BRIEF algorithm. First, Gaussian filtering is applied to the image; then, centred on the feature point, a large window of S × S pixels is taken, a pair of (two) 3 × 3 sub-windows is selected at random inside it, and the pixel sums of the two sub-windows are compared and assigned a binary value. N pairs of sub-windows are selected at random inside the large window and the comparison is repeated, forming a binary code; this code is the description of the feature point, i.e. the feature descriptor.
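The descriptor of step 4.3 reduces to N binary comparisons. In the sketch below, single-pixel comparisons stand in for the 3 × 3 sub-window sums described above, the Gaussian pre-smoothing is omitted, and the random test pattern is fixed by a seed so that descriptors from different images are comparable. One useful property falls out immediately: a constant brightness shift changes neither side of any comparison, so the descriptor is unchanged.

```python
import numpy as np

def brief_descriptor(img, kp, n_bits=128, patch=15):
    """BRIEF-style binary descriptor at keypoint kp = (row, col):
    n_bits random pixel-pair comparisons inside a patch x patch window.
    Sketch: single pixels stand in for the text's 3x3 sub-window sums."""
    rng = np.random.default_rng(42)   # fixed seed -> same test pattern
    half = patch // 2
    r, c = kp
    offsets = rng.integers(-half, half + 1, size=(n_bits, 4))
    desc = np.empty(n_bits, dtype=np.uint8)
    for i, (r1, c1, r2, c2) in enumerate(offsets):
        desc[i] = 1 if img[r + r1, c + c1] < img[r + r2, c + c2] else 0
    return desc

def hamming(a, b):
    """Descriptor distance: number of differing bits."""
    return int(np.count_nonzero(a != b))

# A textured test patch and the same patch 40 grey-levels brighter:
img = ((np.arange(31)[:, None] * 7 + np.arange(31) * 3) % 23).astype(float)
d0 = brief_descriptor(img, (15, 15))
d1 = brief_descriptor(img + 40.0, (15, 15))
```

Matching then amounts to nearest-neighbour search under the Hamming distance, which is why BRIEF descriptor generation and comparison are so fast.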
4.4 The feature descriptors extracted from the previous frame and the current frame are matched with a matching algorithm; here an optical-flow method is used. Based on the image of the previous frame, the algorithm finds, for each pixel of the current frame, the corresponding pixel of the previous frame, and finally obtains a disparity map of the image pixels. The regions of the previous and current frames where feature points are comparatively abundant are selected; on this basis the stereo matching of the two frames is obtained, and by analogy the stereo matching of every frame relative to its previous frame, finally forming multiple sets of matched point pairs.
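The per-point optical-flow match of step 4.4 can be sketched as a single Lucas-Kanade step: a least-squares solve of the brightness-constancy constraint Ix·u + Iy·v = −It over a small window. A practical implementation would add image pyramids and iterative refinement (e.g. OpenCV's `calcOpticalFlowPyrLK`); this minimal version does not.

```python
import numpy as np

def lk_flow(prev, curr, pt, win=7):
    """One Lucas-Kanade step at integer point pt = (row, col): solve
    Ix*u + Iy*v = -It in least squares over a win x win window, where
    Ix, Iy are central-difference gradients of the previous frame and
    It = curr - prev. Returns (u, v): displacement in columns and rows.
    Minimal sketch: no pyramid, no iteration, no sub-pixel sampling."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Ix = np.zeros_like(prev)
    Iy = np.zeros_like(prev)
    Ix[:, 1:-1] = (prev[:, 2:] - prev[:, :-2]) / 2.0
    Iy[1:-1, :] = (prev[2:, :] - prev[:-2, :]) / 2.0
    It = curr - prev
    h = win // 2
    r, c = pt
    w = (slice(r - h, r + h + 1), slice(c - h, c + h + 1))
    A = np.stack([Ix[w].ravel(), Iy[w].ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It[w].ravel(), rcond=None)
    return d

# Synthetic pair: the whole scene shifts one pixel to the right.
rr, cc = np.mgrid[0:31, 0:31]
prev = (rr * cc).astype(float)      # texture with gradients in both axes
curr = np.roll(prev, 1, axis=1)
flow = lk_flow(prev, curr, (15, 15))
```

The window needs gradient structure in two independent directions (as the bilinear test texture has); along a single edge the 2 × 2 system is rank-deficient, which is the aperture problem and one reason the method selects feature-rich regions.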
Step 5: Three-dimensional environment reconstruction
From the sets of matched point pairs obtained in Step 4, combined with the calibrated intrinsic and extrinsic parameters and the acquired disparity map, three-dimensional coordinates are computed by the principle of triangulation; the scale of the reconstruction is obtained from the camera's height above the ground or from other sensors, and the three-dimensional information of the environment is thereby reconstructed. Interpolating and gridding the acquired three-dimensional information yields the three-dimensional information of each frame relative to the previous frame; taking the three-dimensional information of consecutive frames as input, an initial three-dimensional model is established, and by analogy the three-dimensional environment information of the intelligent vehicle at every moment can be reconstructed. Finally, the moving-target information in the detected three-dimensional environment is passed to the decision system, which determines the steering-wheel angle and the throttle setting.
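The triangulation step can be sketched as a linear (DLT) solve: each matched pixel contributes two rows of a homogeneous system A X = 0, solved by SVD. The baseline and intrinsics below are hypothetical stand-ins for the inter-frame camera motion and the calibrated K:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: given 3x4 projection matrices P1, P2 and
    matching pixels x1, x2 = (u, v), solve A X = 0 for the homogeneous 3D
    point via SVD and dehomogenise."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def pixel(P, X):
    """Project a 3D point through P (used to build synthetic matches)."""
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

# Two camera poses with a hypothetical 0.5 m horizontal baseline:
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])          # point 5 m ahead
x1, x2 = pixel(P1, X_true), pixel(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free synthetic matches the DLT recovers the point exactly; with real matches the residual error is what the bundle adjustment below is meant to reduce. Note the recovered coordinates inherit the scale of the baseline, which is why the camera's known height above the ground (or another sensor) is needed to fix absolute scale in the monocular case.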
Accumulated error is often present in the reconstructed three-dimensional point cloud; a combination of global and local bundle adjustment is therefore proposed to optimise the result and improve the accuracy of the three-dimensional environment reconstruction.
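The core of any bundle adjustment is the minimisation of reprojection error. The sketch below refines a single structure point with Gauss-Newton against its observations in several calibrated views; a full bundle adjustment would jointly refine many points and the camera poses as well, globally or over a local window of frames as the text proposes. The camera poses and point are hypothetical test data.

```python
import numpy as np

def refine_point(cams, obs, X0, iters=10):
    """Gauss-Newton refinement of one 3D point X against its observed pixel
    positions in several calibrated cameras: the structure-only core of a
    bundle adjustment (a full BA also updates the camera parameters).
    cams: list of 3x4 projection matrices; obs: list of (u, v) pixels."""
    X = np.asarray(X0, dtype=float).copy()
    for _ in range(iters):
        J = np.zeros((2 * len(cams), 3))   # Jacobian of residuals wrt X
        r = np.zeros(2 * len(cams))        # reprojection residuals
        for i, (P, uv) in enumerate(zip(cams, obs)):
            u, v, w = P @ np.append(X, 1.0)
            r[2 * i:2 * i + 2] = [u / w - uv[0], v / w - uv[1]]
            # d(u/w)/dX and d(v/w)/dX by the quotient rule:
            J[2 * i] = (P[0, :3] * w - u * P[2, :3]) / w ** 2
            J[2 * i + 1] = (P[1, :3] * w - v * P[2, :3]) / w ** 2
        dX, *_ = np.linalg.lstsq(J, -r, rcond=None)
        X += dX
    return X

# Three hypothetical camera poses translated along x and y:
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1.0]])
shifts = (np.zeros(3), np.array([-0.5, 0.0, 0.0]), np.array([0.0, -0.5, 0.0]))
cams = [K @ np.hstack([np.eye(3), t.reshape(3, 1)]) for t in shifts]

X_true = np.array([0.2, -0.1, 5.0])
obs = []
for P in cams:
    p = P @ np.append(X_true, 1.0)
    obs.append(p[:2] / p[2])

# Start from a perturbed estimate and pull it back to the observations:
X_ref = refine_point(cams, obs, X_true + np.array([0.1, -0.05, 0.3]))
```

The "local" variant the text combines with global adjustment amounts to running this optimisation only over a sliding window of recent frames, which keeps the cost bounded for online reconstruction.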
The above embodiment should be understood as merely illustrating, not limiting, the present invention. Having read what is recorded in the present invention, a skilled person may make various changes or modifications to it, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (8)

  1. A monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm, characterised by comprising the following steps:
    1) Combining the experimental scene, calibrate the camera, establish the relation between camera image pixel positions and three-dimensional scene positions, and obtain the intrinsic and extrinsic parameters of the camera;
    2) Then acquire video data with the monocular vision system, perform noise-reduction processing on the video, convert the processed video into a single-frame image sequence, and pre-process the acquired image sequence;
    3) Taking the previous frame as the reference image and the current frame as the detection image, extract feature points with the proposed new algorithm combining the Harris algorithm, the SIFT scale-invariant feature transform algorithm and the BRIEF binary robust independent elementary features algorithm; select the regions of the previous and current frames where feature points are comparatively abundant, and perform feature matching with an optical-flow method, forming multiple sets of matched point pairs;
    4) For the acquired sets of matched point pairs, combined with the calibrated intrinsic and extrinsic parameters, compute three-dimensional coordinates by the principle of triangulation; the scale of the reconstruction can be obtained from the camera's height above the ground or from other sensors, and the three-dimensional information of the environment is thereby reconstructed; interpolating and gridding the acquired three-dimensional information yields the three-dimensional information of each frame relative to the previous frame; taking the three-dimensional information of consecutive frames as input, establish an initial three-dimensional model, and by analogy the three-dimensional environment information of the intelligent vehicle at every moment can be reconstructed;
    5) Finally, pass the moving-target information in the detected three-dimensional environment to the decision system, which decides the steering-wheel angle and the throttle setting.
  2. The monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm according to claim 1, characterised in that the establishment of the monocular vision system of step 2) specifically includes: an ordinary CCD camera is fixed on the roof of an intelligent vehicle and tilted downwards at a certain depression angle; the height of the camera above the ground is measured as h, the depression angle of the camera as β, and the distance at which the camera's line of sight just clears the bonnet and meets the ground as d; the monocular vision system is thus built, and the size of the picture shot by the camera is taken as u × v.
  3. The monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm according to claim 1 or 2, characterised in that step 1) calibrates the camera and establishes the relation between camera image pixel positions and three-dimensional scene positions; obtaining the intrinsic and extrinsic parameters of the camera specifically includes:
    3.1 Choose a large calibration cloth and measure the side length l1 of any one small square on the cloth;
    3.2 Place the calibration cloth in different orientations, ensuring that the camera can photograph it completely, and obtain N images, N ≥ 10; load all images into the Matlab calibration toolbox, input the size l1 and start calibrating the camera; finally obtain the intrinsic parameter matrix K and the extrinsic parameter matrix [R t] of the camera, where K contains the principal-point image-plane coordinates (cx, cy) and the focal lengths fx and fy of the camera in the x-axis and y-axis directions.
  4. The monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm according to claim 3, characterized in that in step 3), with the previous frame as the reference image and the current frame as the detection image, the Harris algorithm is used to extract the corner points in the image: first, let I(x, y) be a pixel in the image and obtain its gradients Ix, Iy in the x and y directions; then obtain the gradient products Ix^2 = Ix·Ix, Iy^2 = Iy·Iy, Ixy = Ix·Iy; apply Gaussian weighting to Ix^2, Iy^2 and Ixy to form the autocorrelation matrix M, and obtain the Harris response of each pixel, R = det M − α(trace M)^2; responses R below the threshold t are set to zero; finally, non-maximum suppression over a 3 × 3 neighborhood is performed, and the corner points extracted from the image are represented by the local maximum points.
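A minimal numpy sketch of the Harris response just described. For brevity it sums the gradient products with uniform window weights where the claim specifies Gaussian weighting, and the test image is a synthetic white square whose corners should score highest:

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Per-pixel Harris response R = det(M) - k * trace(M)^2, where the
    structure tensor M accumulates Ix^2, Iy^2, IxIy over a local window
    (uniform weights here stand in for the Gaussian weighting)."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    pad = win // 2
    R = np.zeros_like(img, dtype=float)
    for r in range(pad, img.shape[0] - pad):
        for c in range(pad, img.shape[1] - pad):
            sxx = Ixx[r - pad:r + pad + 1, c - pad:c + pad + 1].sum()
            syy = Iyy[r - pad:r + pad + 1, c - pad:c + pad + 1].sum()
            sxy = Ixy[r - pad:r + pad + 1, c - pad:c + pad + 1].sum()
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            R[r, c] = det - k * trace * trace
    return R

# A white square on black: corners give R > 0, edges give R < 0,
# flat regions give R ~ 0 -- the ordering the detector relies on.
img = np.zeros((20, 20)); img[5:15, 5:15] = 1.0
R = harris_response(img)
print(R[5, 5] > 0, R[5, 10] < 0, abs(R[10, 10]) < 1e-9)
```

Thresholding R at t and suppressing non-maxima in a 3 × 3 neighborhood, as in the claim, then leaves only the corner responses.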
  5. The monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm according to claim 4, characterized in that in step 3) the SIFT algorithm is used to determine the principal direction and position of the detected feature points, the gradient-direction distribution of the pixels in each feature point's neighborhood being used to assign a direction parameter to the feature point;
    Formula (1) gives the gradient magnitude and direction at (x, y), where L is the scale-space image at the scale at which the respective feature point was detected:

    m(x, y) = sqrt((L(x+1, y) − L(x−1, y))^2 + (L(x, y+1) − L(x, y−1))^2)    (1)
    θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))

    In the actual computation, samples are taken in a neighborhood window centered on the feature point, and the gradient directions of the neighborhood pixels are accumulated into a gradient-orientation histogram; the histogram covers 0°~360°, with one bin per 10°, i.e. 36 bins in total; the peak of the gradient-orientation histogram represents the principal direction of the neighborhood gradients at the feature point, and is taken as the direction of that feature point;
    After the principal direction of a feature point is determined, the extracted corner points are represented by local maximum points; a three-dimensional quadratic function is then fitted to determine the position and scale of the feature point accurately. The Taylor expansion of the scale-space function D(x, y, σ) at the local extremum (x0, y0, σ) is shown in formula (2):

    D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂^2 D/∂X^2) X    (2)
    where X = (x, y, σ)^T; differentiating formula (2) and setting the derivative to zero yields the accurate extremum location Xmax, as shown in formula (3):

    Xmax = −(∂^2 D/∂X^2)^(−1) (∂D/∂X)    (3)
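The refinement of formula (3) amounts to solving a 3 × 3 linear system. An illustrative numpy check on a synthetic quadratic (the gradient and Hessian values are assumed for the example, not derived from a real difference-of-Gaussian pyramid):

```python
import numpy as np

def refine_extremum(grad, hess):
    """Sub-sample offset Xmax = -H^{-1} * dD/dX from the Taylor fit,
    i.e. formula (3) written as one linear solve."""
    return -np.linalg.solve(hess, grad)

# Synthetic quadratic D(X) = -|X - X0|^2 with a known extremum X0.
X0 = np.array([0.3, -0.2, 0.1])            # true (x, y, sigma) offset
hess = -2.0 * np.eye(3)                    # Hessian of D
grad = -2.0 * (np.zeros(3) - X0)           # gradient of D evaluated at X = 0
offset = refine_extremum(grad, hess)
print(np.allclose(offset, X0))  # True
```

For a quadratic the fit is exact, which is why the recovered offset matches X0; on real data the same solve gives the sub-pixel, sub-scale correction.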
  6. The monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm according to claim 5, characterized in that after the positions of the feature points are determined in step 3), the BRIEF algorithm is used to establish a feature descriptor in each feature point's neighborhood, which specifically comprises: first, Gaussian filtering is applied to the image; then, centered on the feature point, a large S × S neighborhood window is taken; a pair of 3 × 3 sub-windows is randomly selected inside the large window, their pixel values are compared, and a binary value is assigned; N pairs of sub-windows are randomly selected inside the large window and the sub-window comparison is repeated, forming an N-bit binary code; this code is the description of the feature point, i.e. the feature descriptor.
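An illustrative numpy sketch of a BRIEF-style descriptor. It simplifies the claim by comparing single pixels at fixed random locations rather than 3 × 3 sub-windows, and the window size S = 15 and bit count of 128 are assumed example values:

```python
import numpy as np

rng = np.random.default_rng(0)
S, N_BITS = 15, 128
# One fixed random test pattern shared by all keypoints, as BRIEF requires:
# each row holds (r1, c1, r2, c2) of one binary comparison.
PATTERN = rng.integers(0, S, size=(N_BITS, 4))

def brief_descriptor(img, kp):
    """N_BITS binary intensity comparisons inside an S x S window centered
    on keypoint kp = (row, col); assumes the image was pre-smoothed."""
    half = S // 2
    r, c = kp
    patch = img[r - half:r + half + 1, c - half:c + half + 1]
    bits = patch[PATTERN[:, 0], PATTERN[:, 1]] < patch[PATTERN[:, 2], PATTERN[:, 3]]
    return bits.astype(np.uint8)

def hamming(d1, d2):
    """Descriptor distance: number of differing bits."""
    return int(np.count_nonzero(d1 != d2))

img = rng.random((64, 64))
d_same = hamming(brief_descriptor(img, (20, 20)), brief_descriptor(img, (20, 20)))
print(d_same)  # 0: the same patch always yields the same binary code
```

Because descriptors are bit strings, matching reduces to Hamming distance, which is the design motivation for BRIEF over floating-point descriptors.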
  7. The monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm according to claim 6, characterized in that in step 3) the feature descriptors extracted from the previous frame and the current frame are matched with an optical-flow matching algorithm; taking the image of the previous frame as the base, the matching algorithm searches, for each pixel of the current frame image, the corresponding pixel in the previous-frame image, and finally obtains the disparity map of the image pixels; selecting feature points in the texture-rich regions of the previous and current frame images yields the stereo matching of the two images; on this basis and by analogy, the stereo matching of every frame relative to its previous frame is obtained, ultimately forming multiple groups of matched point pairs.
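The optical-flow matching step can be illustrated with the classic Lucas-Kanade least-squares solve; this sketch estimates one global displacement over a synthetic 1-pixel shift, whereas a real matcher works per feature point and per pyramid level (the example is an assumption-laden illustration, not the patent's algorithm):

```python
import numpy as np

def lucas_kanade_shift(prev, curr):
    """Single-window Lucas-Kanade: stack the brightness-constancy
    constraints Ix*dx + Iy*dy = -It for all pixels and solve them
    in the least-squares sense for one displacement (dx, dy)."""
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = It.ravel()
    d, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return d  # (dx, dy)

# A smooth synthetic image translated by exactly 1 pixel in x.
y, x = np.mgrid[0:32, 0:32]
prev = np.sin(x / 5.0) + np.cos(y / 7.0)
curr = np.sin((x - 1) / 5.0) + np.cos(y / 7.0)
dx, dy = lucas_kanade_shift(prev, curr)
print(round(float(dx), 1), round(float(dy), 1))
```

The recovered displacement is close to (1, 0); collecting such correspondences at the selected feature points yields the matched point pairs the claim feeds into triangulation.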
  8. The monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm according to claim 7, characterized in that the three-dimensional environment reconstruction in step 4) mainly uses the triangulation principle, and the accumulated error of the three-dimensional environment reconstruction is optimized by combining global bundle adjustment with local bundle adjustment.
CN201710516013.8A 2017-06-29 2017-06-29 Monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm Pending CN107481315A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710516013.8A CN107481315A (en) 2017-06-29 2017-06-29 Monocular-vision three-dimensional environment reconstruction method based on the Harris-SIFT-BRIEF algorithm

Publications (1)

Publication Number Publication Date
CN107481315A true CN107481315A (en) 2017-12-15

Family

ID=60594853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710516013.8A Pending CN107481315A (en) 2017-06-29 2017-06-29 A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms

Country Status (1)

Country Link
CN (1) CN107481315A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899918A (en) * 2015-05-14 2015-09-09 深圳大学 Three-dimensional environment modeling method and system for unmanned plane

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FENG Yiliu: "Real-time-oriented optimization and implementation of the SIFT image matching algorithm", China Master's Theses Full-text Database, Information Science and Technology (monthly) *
JIANG Daihong: "Automatic fast image mosaicking method based on Harris-SIFT", in: Surveillance image stitching and recognition in complex environments *
SUN Miaomiao et al.: "Transmission-line image segmentation method based on image stitching and inter-frame difference", Infrared Technology *
YANG Gang: "Research on camera motion estimation and three-dimensional reconstruction algorithms based on monocular vision", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257161A (en) * 2018-01-16 2018-07-06 重庆邮电大学 Vehicle environmental three-dimensionalreconstruction and movement estimation system and method based on polyphaser
CN108257161B (en) * 2018-01-16 2021-09-10 重庆邮电大学 Multi-camera-based vehicle environment three-dimensional reconstruction and motion estimation system and method
CN110049323B (en) * 2018-01-17 2021-09-07 华为技术有限公司 Encoding method, decoding method and device
CN110049323A (en) * 2018-01-17 2019-07-23 华为技术有限公司 Coding method, coding/decoding method and device
CN110197104A (en) * 2018-02-27 2019-09-03 杭州海康威视数字技术股份有限公司 Distance measuring method and device based on vehicle
CN109089100A (en) * 2018-08-13 2018-12-25 西安理工大学 A kind of synthetic method of binocular tri-dimensional video
CN109089100B (en) * 2018-08-13 2020-10-23 西安理工大学 Method for synthesizing binocular stereo video
CN109344846A (en) * 2018-09-26 2019-02-15 联想(北京)有限公司 Image characteristic extracting method and device
WO2020063987A1 (en) * 2018-09-30 2020-04-02 先临三维科技股份有限公司 Three-dimensional scanning method and apparatus and storage medium and processor
CN109410325A (en) * 2018-11-01 2019-03-01 中国矿业大学(北京) A kind of pipeline inner wall three-dimensional reconstruction algorithm based on monocular image sequence
CN109872371A (en) * 2019-01-24 2019-06-11 哈尔滨理工大学 A kind of monocular vision three-dimensional rebuilding method based on improvement Sift algorithm

Similar Documents

Publication Publication Date Title
CN107481315A (en) A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms
CN104484668B (en) A kind of contour of building line drawing method of the how overlapping remote sensing image of unmanned plane
CN104766058B (en) A kind of method and apparatus for obtaining lane line
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
CN107204010A (en) A kind of monocular image depth estimation method and system
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN106595659A (en) Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
CN106529538A (en) Method and device for positioning aircraft
CN103839277A (en) Mobile augmented reality registration method of outdoor wide-range natural scene
CN103530881B (en) Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal
CN104794737B (en) A kind of depth information Auxiliary Particle Filter tracking
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN106803262A (en) The method that car speed is independently resolved using binocular vision
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
CN106846367A (en) A kind of Mobile object detection method of the complicated dynamic scene based on kinematic constraint optical flow method
CN104574401A (en) Image registration method based on parallel line matching
CN107016646A (en) One kind approaches projective transformation image split-joint method based on improved
CN110569704A (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN106295657A (en) A kind of method extracting human height's feature during video data structure
CN107067415B (en) A kind of object localization method based on images match
CN108731587A (en) A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model
CN110008913A (en) The pedestrian's recognition methods again merged based on Attitude estimation with viewpoint mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171215