CN102956031A - Device and method for acquiring three-dimensional scene information - Google Patents

Device and method for acquiring three-dimensional scene information

Info

Publication number
CN102956031A
Authority
CN
China
Prior art keywords
image
feature point
point
candidate
three-dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201110241798
Other languages
Chinese (zh)
Inventor
程懿远 (Cheng Yiyuan)
王嘉 (Wang Jia)
鲍东山 (Bao Dongshan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Nufront Mobile Multimedia Technology Co Ltd
Original Assignee
Beijing Nufront Mobile Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Nufront Mobile Multimedia Technology Co Ltd
Priority to CN 201110241798
Publication of CN102956031A
Legal status: Pending


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a device and a method for acquiring three-dimensional scene information. By adding a non-repeating texture to textureless and repetitive-texture regions, the feature vectors of those regions become effective and non-repetitive, which raises the stereo-matching accuracy. The approach solves the problem of inaccurate matching in textureless and repetitive-texture regions: their matching accuracy is substantially improved without increasing the algorithmic complexity, and the matching errors those regions otherwise cause are effectively eliminated.

Description

Method and device for acquiring three-dimensional scene information
 
Technical field
The present invention relates to three-dimensional vision processing, and in particular to a method and device for acquiring three-dimensional scene information.
 
Background technology
The basic principle of binocular stereo vision is to observe the same scene from two or more viewpoints, obtaining images of the object under different viewing angles, and to recover three-dimensional information by triangulation from the positional offset between image pixels, namely the disparity. A complete stereo vision system comprises image acquisition, camera calibration, feature extraction, stereo matching, three-dimensional information recovery, and post-processing. Among these stages, feature extraction and stereo matching are the key techniques of stereo vision, and also its main difficulties; the quality of their results strongly affects the precision of the subsequent three-dimensional reconstruction.
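As an illustration of the triangulation step, the following minimal sketch (with hypothetical focal length, baseline, and disparity values; it is not part of the patent) recovers depth from disparity for a rectified binocular rig:

```python
import numpy as np

# For a rectified binocular rig, triangulation reduces to Z = f * B / d,
# where f is the focal length in pixels, B the baseline in meters, and
# d the horizontal disparity between matched pixels.
def depth_from_disparity(disparity, focal_px, baseline_m):
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0            # zero disparity means the point is at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical numbers: f = 700 px, B = 0.12 m, d = 42 px  ->  Z = 2.0 m
print(depth_from_disparity(np.array([42.0]), 700.0, 0.12))
```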
Stereo matching seeks the one-to-one correspondence between pixels of the projections of the same spatial scene under different viewpoints. Unlike ordinary template matching, stereo matching is carried out between two or more images that differ in viewpoint and exhibit geometric and photometric distortion as well as noise, with no standard template available. When a three-dimensional scene is projected onto two-dimensional images, factors in the scene such as illumination conditions, scene geometry and physical properties, noise, distortion, and camera characteristics make the images of the same scene under different viewpoints differ greatly; matching images that contain so many adverse factors both unambiguously and accurately is very difficult.
According to the matching primitive, stereo matching algorithms are usually divided into three major classes: region-based, feature-based, and phase-based. Because their matching primitives differ, the three classes rest on different theoretical grounds for deciding correspondences, yet they share some common constraint conditions. These include the basic physical constraints proposed by Marr, namely the uniqueness, consistency, and continuity constraints, together with several specific matching-control constraints derived from these three. Applying these matching constraints not only improves matching accuracy but also reduces the matching workload and increases matching speed, bringing stereo vision measurement systems closer to practical application.
In the feature extraction and image matching stages of binocular stereo vision, the prior art can be summarized as taking the color or gradient information within a certain neighborhood as the feature vector of a point, then searching the left and right views for points with similar feature vectors and taking those as matches. Two situations, however, cannot be solved by this approach: 1. textureless regions (regions with no color variation, such as a white wall): the color there is uniform and carries no gradient information, so no effective feature vector can be extracted for matching; 2. repetitive-texture regions (regions whose color varies repetitively, such as the repeating print of a knitted fabric): effective feature vectors can be extracted, but because of the repetition many points share an identical feature vector, and identical feature vectors cause mismatches at the matching stage.
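The first failure mode can be reproduced numerically. The following minimal sketch (synthetic data; the function name is an illustrative choice, not taken from the patent) builds a window feature vector from intensity and horizontal gradient, in the spirit of the prior art described above, and shows that two distinct points of a uniform region yield identical vectors:

```python
import numpy as np

def window_descriptor(img, y, x, r=2):
    """Feature vector of point (y, x): the raw intensities plus the
    horizontal gradients of the (2r+1) x (2r+1) window around it."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    return np.concatenate([patch.ravel(), np.gradient(patch, axis=1).ravel()])

flat = np.full((20, 20), 128, dtype=np.uint8)   # textureless region (white wall)
d_a = window_descriptor(flat, 10, 5)            # point A
d_b = window_descriptor(flat, 10, 14)           # point B
print(np.linalg.norm(d_a - d_b))                # 0.0: A and B cannot be told apart
```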
 
Summary of the invention
In view of this, the technical problem to be solved by the present invention is to provide a method and device for acquiring three-dimensional scene information. To give the reader a basic understanding of some aspects of the disclosed embodiments, a brief summary is presented below. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delimit the scope of the embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the detailed description that follows.
One aspect of the present invention provides a method for acquiring three-dimensional scene information, which captures scene images by binocular stereo vision and comprises:
adding a non-repeating texture to the three-dimensional scene;
capturing the three-dimensional scene images that contain the non-repeating texture, and extracting pixel features from the image around each candidate feature point;
matching the candidate feature points of the left and right images, determining the feature points, and obtaining the three-dimensional information of those feature points.
In an optional embodiment, the non-repeating texture is added to the three-dimensional scene by projector projection, by laser, or by infrared light.
In an optional embodiment, the pixel feature extraction extracts image features from the left and right images separately, using pixel intensity and/or gradient information.
In an optional embodiment, matching the candidate feature points of the two images means that, according to the extracted image features, the corresponding point of any candidate feature point in one image is found in the other image, thereby forming the matching map.
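A minimal sketch of how such a matching map might be formed (assuming rectified images, so that corresponding points lie on the same row; the descriptor is the illustrative intensity-plus-gradient window vector from the sketch above):

```python
import numpy as np

def descriptor(img, y, x, r=2):
    # Intensity-plus-gradient window feature vector, as in the earlier sketch.
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    return np.concatenate([patch.ravel(), np.gradient(patch, axis=1).ravel()])

def match_point(left, right, y, x, r=2):
    """For a candidate feature point (y, x) in the left image, scan the same
    row of the rectified right image and return the column whose window
    descriptor is nearest in Euclidean distance, plus that distance."""
    target = descriptor(left, y, x, r)
    cols = list(range(r, right.shape[1] - r))
    dists = [np.linalg.norm(descriptor(right, y, c, r) - target) for c in cols]
    best = int(np.argmin(dists))
    return cols[best], dists[best]
```

Repeating this for every candidate feature point in one image yields the matching map; the disparity of each matched pair then gives its three-dimensional position by triangulation.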
Another aspect of the present invention provides a three-dimensional scene information acquisition device, comprising:
Projecting unit: adds a non-repeating texture to the three-dimensional scene;
Collecting unit: captures the three-dimensional scene images that contain the non-repeating texture and extracts pixel features from the image around each candidate feature point;
Analyzing unit: matches the candidate feature points of the left and right images, determines the feature points, and obtains the three-dimensional information of those points.
In an optional embodiment, the projecting unit adds the non-repeating texture by projector projection, by laser, or by infrared light.
In an optional embodiment, the collecting unit extracts image features from the left and right images separately, using pixel intensity and/or gradient information.
In an optional embodiment, the analyzing unit, according to the image features extracted by the collecting unit, finds for any candidate feature point in one image its corresponding point in the other image, thereby forming the matching map.
 
To the accomplishment of the foregoing and related ends, one or more embodiments comprise the features described in detail below and particularly pointed out in the claims. The following description and the accompanying drawings set forth certain illustrative aspects, which indicate only a few of the various ways in which the principles of the embodiments may be employed. Other advantages and novel features will become apparent from the following detailed description considered in conjunction with the drawings, and the disclosed embodiments are intended to include all such aspects and their equivalents.
By adding a non-repeating texture to textureless and repetitive-texture regions, the present invention makes their feature vectors effective and non-repetitive, thereby improving the matching accuracy. Any non-repeating texture can be added to the scene by a projector, a laser, infrared light, or any other means. The invention thus solves the inaccurate matching of textureless and repetitive-texture regions: their matching accuracy is substantially improved without increasing the algorithmic complexity, and the matching errors they would otherwise cause are effectively eliminated.
 
Brief description of the drawings
Fig. 1 is a stereo image pair captured by a conventional binocular camera;
Fig. 2 is another stereo image pair captured by a conventional binocular camera;
Fig. 3 is a schematic diagram of a textureless image in conventional binocular stereo vision;
Fig. 4 is a schematic diagram of a binocular stereo vision image according to the present invention;
Fig. 5 is a schematic diagram of a non-repeating texture according to the present invention;
Fig. 6 compares the processing flow of the present invention with the conventional flow;
Fig. 7 shows the capture setup of the binocular vision system of the present invention;
Fig. 8 is a schematic diagram of the device of the present invention.
 
Detailed description of the embodiments
The following description and drawings illustrate specific embodiments of the invention sufficiently to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes; the examples merely represent possible variations. Unless explicitly required, individual components and functions are optional, and the order of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of others. The scope of the embodiments encompasses the full scope of the claims and all available equivalents thereof. Herein, the embodiments may be referred to, individually or collectively, by the term "invention" merely for convenience; this is not intended to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.
The framed region in Fig. 1 is a textureless region. As can be seen from Fig. 1, the colors around points A and B are similar and show no obvious variation, so feature vectors extracted from color and gradient cannot distinguish point A from point B, and points A and B in the left image cannot be correctly mapped to points A1 and B1 in the right image.
The framed region in Fig. 2 is a repetitive-texture region. As can be seen from Fig. 2, the colors around points A and B are similar and do vary noticeably, but the magnitude and direction of the variation are identical, so feature vectors extracted from color and gradient again cannot distinguish point A from point B, and points A and B in the left image cannot be correctly mapped to points A1 and B1 in the right image.
Fig. 3 is a schematic diagram of a textureless image in conventional binocular stereo vision, showing the contrast within a textureless region. Because the neighborhood exhibits no obvious color variation, the feature vectors extracted by existing means, that is, from color and gradient, cannot distinguish point A1 from point B1; within a single image no point can be distinguished at all. Consequently, when the two images are matched, A1 cannot be mapped to A2, nor B1 to B2.
In an optional embodiment of the present invention, any non-repeating texture can be added to the scene by projection or other means; capturing the scene again with the binocular camera then yields a stereo image pair. Fig. 4 shows the schematic of Fig. 3 after the non-repeating texture has been added. The main purpose of adding the non-repeating texture is to differentiate the pixel information around the candidate feature points. As can be seen from Fig. 4, once the texture is added, the neighborhoods of points A1, B1, A2, and B2 differ in both the magnitude and the direction of their gradients, as indicated by the dashed lines in the figure: to the left of A1 and A2 the gradient changes slowly while to the right it changes quickly, which is clearly different from the gradients around B1 and B2. In this embodiment, adding the texture differentiates the gradients around the candidate feature points, so that A1 matches A2 and B1 matches B2. Feature vectors extracted from color and gradient can now distinguish A1 from B1, and points A1 and B1 in the left image correctly match points A2 and B2 in the right image. Besides gradient variation, many other feature extraction and stereo matching schemes exist in the prior art, and the present invention places no restriction on them.
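The effect described above can be checked numerically with the descriptor sketch used earlier (synthetic data; the random pattern stands in for the projected texture and is an illustrative assumption):

```python
import numpy as np

def descriptor(img, y, x, r=2):
    # Intensity-plus-gradient window feature vector, as in the earlier sketches.
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    return np.concatenate([patch.ravel(), np.gradient(patch, axis=1).ravel()])

rng = np.random.default_rng(0)
flat = np.full((20, 20), 128, dtype=np.int32)             # white-wall region
textured = flat + rng.integers(-40, 41, size=flat.shape)  # with projected non-repeating pattern

for img, name in [(flat, "without texture"), (textured, "with texture")]:
    gap = np.linalg.norm(descriptor(img, 10, 5) - descriptor(img, 10, 14))
    print(f"descriptor distance {name}: {gap:.1f}")       # 0.0 without, clearly > 0 with
```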
In some optional embodiments, the non-repeating texture added to the scene can take the shape shown in Fig. 5; any pattern that differentiates the image of a repetitive-texture scene once added will do, and the present invention places no restriction on the form of the non-repeating texture.
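One simple way to generate such a pattern is a sparse random dot image (a sketch only; the density and resolution are arbitrary illustrative choices, and as stated above the invention places no restriction on the pattern's form):

```python
import numpy as np

def random_dot_pattern(height, width, density=0.05, seed=0):
    """A sparse random dot image: having no repeating structure, it gives
    every window of the projected scene a distinctive local texture."""
    rng = np.random.default_rng(seed)
    return (rng.random((height, width)) < density).astype(np.uint8) * 255

pattern = random_dot_pattern(768, 1024)   # image to feed to the projecting unit
```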
Fig. 6 shows the flow of acquiring three-dimensional scene information according to the present invention and how it differs from the conventional flow. By adding a non-repeating texture to textureless and repetitive-texture regions, the invention makes their feature vectors effective and non-repetitive and thereby improves the matching accuracy. Any non-repeating texture can be added to the scene by a projector, a laser, infrared light, or any other means. This solves the inaccurate matching of textureless and repetitive-texture regions: their matching accuracy is substantially improved without increasing the algorithmic complexity, and the matching errors they would otherwise cause are effectively eliminated.
Referring to Fig. 8, another aspect of the present invention provides a three-dimensional scene information acquisition device, comprising:
Projecting unit S01: adds a non-repeating texture to the three-dimensional scene;
Collecting unit S02: captures the three-dimensional scene images that contain the non-repeating texture and extracts pixel features from the image around each candidate feature point;
Analyzing unit S03: matches the candidate feature points of the left and right images, determines the feature points, and obtains the three-dimensional information of those points.
In an optional embodiment, the projecting unit S01 adds the non-repeating texture by projector projection, by laser, or by infrared light.
In an optional embodiment, the collecting unit S02 extracts image features from the left and right images separately, using pixel intensity and/or gradient information.
In an optional embodiment, the analyzing unit S03, according to the image features extracted by the collecting unit S02, finds for any candidate feature point in one image its corresponding point in the other image, thereby forming the matching map.
In an optional embodiment, Fig. 7 shows the capture setup of the device for acquiring three-dimensional scene information; the projection equipment can be placed at any position, as long as the projected pattern can be captured by both cameras of the binocular head simultaneously.
Fig. 8 is a schematic diagram of the device for acquiring three-dimensional scene information. The projecting unit S01 adds a non-repeating texture to the three-dimensional scene, mainly to differentiate the pixel information around the candidate feature points; the collecting unit S02 performs feature extraction on the captured images containing the non-repeating texture and determines the feature points for which three-dimensional information is to be obtained; finally, the analyzing unit S03 stereo-matches the two images, from which the three-dimensional information of the feature points is determined.
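A structural sketch of how the three units might be wired together (class, method, and interface names are illustrative assumptions, not part of the patent; the projector and camera interfaces are left abstract):

```python
class SceneInfoAcquirer:
    """Illustrative wiring of the units of Fig. 8: S01 projects the
    non-repeating texture, S02 captures a stereo pair and extracts
    features, S03 matches them and recovers depth."""

    def __init__(self, projector, stereo_camera, pattern):
        self.projector = projector     # projecting unit S01 (interface assumed)
        self.camera = stereo_camera    # front end of collecting unit S02
        self.pattern = pattern         # non-repeating texture image

    def acquire(self):
        self.projector.project(self.pattern)        # S01: add texture to the scene
        left, right = self.camera.capture_pair()    # S02: capture the stereo images
        # S03: match candidate feature points and triangulate; the matching and
        # depth sketches given earlier in this description would slot in here.
        raise NotImplementedError("matching and triangulation omitted in this sketch")
```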
 
The disclosed embodiments are described above to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments without departing from the spirit and scope of the disclosure. The disclosure is therefore not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed in this application.

Claims (8)

1. A method for acquiring three-dimensional scene information, which captures scene images by binocular stereo vision, characterized by comprising:
adding a non-repeating texture to the three-dimensional scene;
capturing the three-dimensional scene images that contain the non-repeating texture, and extracting pixel features from the image around each candidate feature point;
matching the candidate feature points of the left and right images, determining the feature points, and obtaining the three-dimensional information of the feature points.
2. The method of claim 1, characterized in that the non-repeating texture is added to the three-dimensional scene by projector projection, by laser, or by infrared light.
3. The method of claim 1, characterized in that the pixel feature extraction extracts image features from the left and right images separately, using pixel intensity and/or gradient information.
4. The method of claim 1, characterized in that matching the candidate feature points of the two images comprises, according to the extracted image features, finding for any candidate feature point in one image its corresponding point in the other image, thereby forming the matching map.
5. A device for acquiring three-dimensional scene information, characterized by comprising:
a projecting unit, which adds a non-repeating texture to the three-dimensional scene;
a collecting unit, which captures the three-dimensional scene images that contain the non-repeating texture and extracts pixel features from the image around each candidate feature point;
an analyzing unit, which matches the candidate feature points of the left and right images, determines the feature points, and obtains the three-dimensional information of the feature points.
6. The device of claim 5, characterized in that the projecting unit adds the non-repeating texture by projector projection, by laser, or by infrared light.
7. The device of claim 5, characterized in that the collecting unit extracts image features from the left and right images separately, using pixel intensity and/or gradient information.
8. The device of claim 5, characterized in that the analyzing unit, according to the image features extracted by the collecting unit, finds for any candidate feature point in one image its corresponding point in the other image, thereby forming the matching map.
CN 201110241798, filed 2011-08-22 (priority date 2011-08-22): Device and method for acquiring three-dimensional scene information, published as CN102956031A (pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110241798 CN102956031A (en) 2011-08-22 2011-08-22 Device and method for acquiring three-dimensional scene information


Publications (1)

Publication Number Publication Date
CN102956031A 2013-03-06

Family

ID=47764809



Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020173461A1 (en) * 2019-02-28 2020-09-03 深圳市道通智能航空技术有限公司 Obstacle detection method, device and unmanned air vehicle
US12015757B2 (en) 2019-02-28 2024-06-18 Autel Robotics Co., Ltd. Obstacle detection method and apparatus and unmanned aerial vehicle
CN110264556A * 2019-06-10 2019-09-20 张慧 Method for generating a random, non-repeating complex texture
CN110599531A (en) * 2019-09-11 2019-12-20 北京迈格威科技有限公司 Repetitive texture feature description method and device and binocular stereo matching method and device
CN110599531B (en) * 2019-09-11 2022-04-29 北京迈格威科技有限公司 Repetitive texture feature description method and device and binocular stereo matching method and device
CN110686687A (en) * 2019-10-31 2020-01-14 珠海市一微半导体有限公司 Method for constructing map by visual robot, robot and chip
CN110686687B (en) * 2019-10-31 2021-11-09 珠海市一微半导体有限公司 Method for constructing map by visual robot, robot and chip


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2013-03-06