CN106548482B - Dense matching method and system based on sparse matching and image edges - Google Patents
- Publication number
- CN106548482B (application CN201610908117.9A)
- Authority
- CN
- China
- Prior art keywords
- view
- point
- matching
- parallax
- pixel point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a dense matching method and system based on sparse matching and image edges. The method comprises the following steps: S1, obtaining an edge image of a first view through edge detection; S2, establishing a feature vector for each pixel of the first view and a second view; S3, determining feature points in the first view, searching for matching points in the second view through sparse matching, and calculating the disparity values of the corresponding pixels; S4, calculating a disparity value for each pixel of the first view whose disparity is unknown, yielding an initial disparity map; and S5, filtering the initial disparity map with a weighted-mean filtering algorithm to obtain an accurate dense disparity map. The method combines the edge image obtained by edge detection with the feature-point disparity values obtained by sparse matching to produce the initial disparity map, then applies weighted-mean filtering to that map to obtain an accurate dense disparity map; it is therefore fast, efficient, and accurate in its matching results.
Description
Technical Field
The invention relates to a dense matching method and system based on sparse matching and image edges, applied in the field of stereoscopic vision.
Background
With the development of computer vision, stereoscopic vision technology is widely applied in robot navigation, intelligent transportation, military guidance, and other areas. Stereo vision computes the three-dimensional coordinates of a point in space from the disparity of that point between the image planes of two cameras. Obtaining the disparity requires stereo matching, which is therefore one of the most important and difficult steps in stereo vision measurement. To compute disparity, the left and right views of a binocular setup are typically matched to obtain a dense disparity map. Traditional dense matching methods must solve the left and right disparity maps separately and then post-process both to obtain an accurate dense disparity map; that is, dense matching must be performed twice, a processing flow that consumes a lot of time and is inefficient.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a dense matching method and system based on sparse matching and image edges.
The purpose of the invention is realized by the following technical scheme: a dense matching method based on sparse matching and image edges comprises the following steps:
S1, performing edge detection on the first view with an edge detection algorithm to obtain a first-view edge image;
S2, establishing a feature vector for each pixel of the first view and the second view;
S3, dividing the first view into R×R blocks, selecting the point with the largest gradient in each block as a feature point, searching for the corresponding matching point in the second view with a matching algorithm for each feature point, and calculating the disparity value of the corresponding pixel;
S4, for each pixel in the first view whose disparity value is unknown, finding the K nearest neighboring points with known disparity, computing the geodesic distance between the pixel and each of these K neighbors over the first-view edge image, and taking an exponentially weighted average according to the geodesic distances; the weighted average is the disparity estimate of that pixel, and the result is an initial disparity map;
and S5, filtering the initial disparity map with a weighted-mean filtering algorithm to obtain an accurate dense disparity map.
Further, the first view is a left view, and the second view is a right view; or the first view is a right view and the second view is a left view.
Further, the feature vector may consist of pixel brightness, pixel gradient, or pixel correlation, or of a combination of the three.
Furthermore, the block side length R ranges from 5 to 10.
Furthermore, the number of nearest neighbors K ranges from 10 to 200.
Further, the matching algorithm is a sparse matching algorithm.
Further, the edge detection algorithm includes SED (structured edge detection), the Marr (LoG) operator, or the Canny operator.
A dense matching system based on sparse matching and image edges, comprising:
an edge detection module, used to perform edge detection on the first view to obtain an edge image;
a feature vector establishment module, used to establish a feature vector for each pixel of the first view and the second view;
a sparse matching module, used to divide the first view into R×R blocks, select the point with the largest gradient in each block as a feature point, search for the corresponding matching point in the second view with a matching algorithm for each feature point, and calculate the disparity value of the corresponding pixel;
a dense interpolation module, used to traverse each pixel of the first view and, if the pixel's disparity value is unknown, find the K nearest neighboring points with known disparity, compute the geodesic distance between the pixel and each of these K neighbors, and take an exponentially weighted average according to the geodesic distances, the weighted average being the disparity estimate of that pixel, thereby obtaining an initial disparity map;
and a filtering module, used to apply weighted-mean filtering to the initial disparity map to obtain an accurate dense disparity map.
The invention has the beneficial effects that: the initial disparity map is obtained by combining the edge image from edge detection with the feature-point disparity values from sparse matching, and weighted-mean filtering of the initial disparity map then yields an accurate dense disparity map; the method is therefore fast, efficient, and accurate in its matching results.
Drawings
FIG. 1 is a flow chart of a dense matching method of the present invention;
FIG. 2 is a schematic block diagram of a dense matching system of the present invention.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in FIG. 1, a dense matching method based on sparse matching and image edges includes the following steps:
s1, performing edge detection on the first view by adopting an edge detection algorithm to obtain a first view edge image;
in an embodiment of the present application, the edge detection algorithm includes an SED, a marlog operator, or a Canny operator.
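As a minimal, hedged sketch of step S1 — a simplified stand-in for the SED/Marr/Canny detectors named here, not the inventors' implementation — a thresholded gradient magnitude already produces a usable binary edge image:

```python
def edge_map(img, thresh):
    """Binary edge image from central-difference gradient magnitude.

    A simplified stand-in for the SED/Marr/Canny detectors named in the
    patent; `thresh` is an illustrative parameter, not from the patent text.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if gx * gx + gy * gy >= thresh * thresh:
                edges[y][x] = 1
    return edges
```

A vertical intensity step in the input yields a vertical line of edge pixels at the step location, which is the only property the later geodesic step relies on.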
S2, establishing a feature vector for each pixel of the first view and the second view.
In one embodiment, the feature vector may consist of pixel brightness, pixel gradient, or pixel correlation; in another embodiment, it may consist of a combination of the three.
S3, dividing the first view into R×R blocks, selecting the point with the largest gradient in each block as a feature point, searching for the corresponding matching point in the second view with a matching algorithm for each feature point, and calculating the disparity value of the corresponding pixel.
The matching algorithm is a sparse matching algorithm. The block side length R ranges from 5 to 10; in some embodiments of the present application, for simplicity, the first view is divided directly into equally sized 5 × 5 blocks.
S4, for each pixel in the first view whose disparity value is unknown, finding the K nearest neighboring points with known disparity, computing the geodesic distance between the pixel and each of these K neighbors over the first-view edge image, and taking an exponentially weighted average according to the geodesic distances; the weighted average is the disparity estimate of that pixel, and the result is an initial disparity map. The number of nearest neighbors K ranges from 10 to 200.
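The interpolation of S4 can be sketched with a small Dijkstra search over the pixel grid, in which stepping onto an edge pixel carries an extra cost so that disparity support does not leak across object boundaries. The parameters `K`, `sigma`, and `edge_cost` are illustrative assumptions, not values from the patent:

```python
import heapq
import math

def geodesic_interpolate(edges, known, y, x, K=4, sigma=2.0, edge_cost=10.0):
    """Estimate the disparity at (y, x) as the exponentially weighted
    average of its K nearest known-disparity pixels, where "nearest" is
    geodesic: each step onto an edge pixel of `edges` adds `edge_cost`.

    `known` maps (row, col) -> disparity.
    """
    h, w = len(edges), len(edges[0])
    dist = {(y, x): 0.0}
    heap = [(0.0, y, x)]
    seeds = []  # (geodesic distance, disparity) of reached known pixels
    while heap and len(seeds) < K:
        d, cy, cx = heapq.heappop(heap)
        if d > dist.get((cy, cx), float("inf")):
            continue  # stale queue entry
        if (cy, cx) in known and (cy, cx) != (y, x):
            seeds.append((d, known[(cy, cx)]))
        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 1.0 + edge_cost * edges[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    wsum = vsum = 0.0
    for d, disp in seeds:
        wgt = math.exp(-d / sigma)
        wsum += wgt
        vsum += wgt * disp
    return vsum / wsum if wsum else None
```

Because the search expands geodesically, a pixel next to an edge draws its estimate almost entirely from seeds on its own side of that edge, which is what keeps the interpolated disparity crisp at object boundaries.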
And S5, filtering the initial disparity map with a weighted-mean filtering algorithm to obtain an accurate dense disparity map.
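The patent only names a "weight mean filtering algorithm" for S5; one common concrete choice, sketched here as an assumption rather than as the inventors' exact filter, is a bilateral-style weighted mean guided by the view's intensities:

```python
import math

def weighted_mean_filter(disp, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Weighted-mean filtering of the initial disparity map.

    Each disparity becomes a mean of its neighbours, weighted by spatial
    closeness and by intensity similarity in the guiding view (bilateral
    weights); `radius`, `sigma_s`, `sigma_r` are illustrative parameters.
    """
    h, w = len(disp), len(disp[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            wsum = vsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
                        dr = guide[y][x] - guide[ny][nx]
                        wr = math.exp(-(dr * dr) / (2.0 * sigma_r ** 2))
                        wsum += ws * wr
                        vsum += ws * wr * disp[ny][nx]
            out[y][x] = vsum / wsum
    return out
```

The intensity term keeps the smoothing edge-aware: disparities are averaged within regions of similar appearance, which suppresses interpolation noise without blurring depth discontinuities.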
The method is mainly applied in the field of stereoscopic vision to obtain an accurate dense disparity map. The first view may be the left view of a binocular viewing angle, with the second view correspondingly the right view; or the first view may be the right view, with the second view correspondingly the left view.
As shown in FIG. 2, a dense matching system based on sparse matching and image edges comprises:
an edge detection module, used to perform edge detection on the first view to obtain an edge image;
a feature vector establishment module, used to establish a feature vector for each pixel of the first view and the second view;
a sparse matching module, used to divide the first view into R×R blocks, select the point with the largest gradient in each block as a feature point, search for the corresponding matching point in the second view with a matching algorithm for each feature point, and calculate the disparity value of the corresponding pixel;
a dense interpolation module, used to traverse each pixel of the first view and, if the pixel's disparity value is unknown, find the K nearest neighboring points with known disparity, compute the geodesic distance between the pixel and each of these K neighbors, and take an exponentially weighted average according to the geodesic distances, the weighted average being the disparity estimate of that pixel, thereby obtaining an initial disparity map;
and a filtering module, used to apply weighted-mean filtering to the initial disparity map to obtain an accurate dense disparity map.
Claims (6)
1. A dense matching method based on sparse matching and image edges, characterized by comprising the following steps:
S1, performing edge detection on the first view with an edge detection algorithm to obtain a first-view edge image;
S2, establishing a feature vector for each pixel of the first view and the second view, wherein the feature vector consists of pixel brightness, pixel gradient, or pixel correlation, or of a combination of the three;
S3, dividing the first view into R×R blocks, selecting the point with the largest gradient in each block as a feature point, searching for the corresponding matching point in the second view with a matching algorithm for each feature point, and calculating the disparity value of the corresponding pixel;
the matching algorithm being a sparse matching algorithm;
S4, for each pixel in the first view whose disparity value is unknown, finding the K nearest neighboring points with known disparity, computing the geodesic distance between the pixel and each of these K neighbors over the first-view edge image, and taking an exponentially weighted average according to the geodesic distances, the weighted average being the disparity estimate of that pixel, thereby obtaining an initial disparity map;
and S5, filtering the initial disparity map with a weighted-mean filtering algorithm to obtain an accurate dense disparity map.
2. The dense matching method based on sparse matching and image edges as claimed in claim 1, wherein: the first view is a left view and the second view is a right view; or the first view is a right view and the second view is a left view.
3. The dense matching method based on sparse matching and image edges as claimed in claim 1, wherein: the block side length R ranges from 5 to 10.
4. The dense matching method based on sparse matching and image edges as claimed in claim 1, wherein: the number of nearest neighbors K ranges from 10 to 200.
5. The dense matching method based on sparse matching and image edges as claimed in claim 1, wherein: the edge detection algorithm comprises the SED operator, the Marr (LoG) operator, or the Canny operator.
6. A dense matching system based on sparse matching and image edges, characterized by comprising:
an edge detection module, used to perform edge detection on the first view to obtain an edge image;
a feature vector establishment module, used to establish a feature vector for each pixel of the first view and the second view, wherein the feature vector consists of pixel brightness, pixel gradient, or pixel correlation, or of a combination of the three;
a sparse matching module, used to divide the first view into R×R blocks, select the point with the largest gradient in each block as a feature point, search for the corresponding matching point in the second view with a matching algorithm for each feature point, and calculate the disparity value of the corresponding pixel;
a dense interpolation module, used to traverse each pixel of the first view and, if the pixel's disparity value is unknown, find the K nearest neighboring points with known disparity, compute the geodesic distance between the pixel and each of these K neighbors, and take an exponentially weighted average according to the geodesic distances, the weighted average being the disparity estimate of that pixel, thereby obtaining an initial disparity map;
and a filtering module, used to apply weighted-mean filtering to the initial disparity map to obtain an accurate dense disparity map.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610908117.9A | 2016-10-19 | 2016-10-19 | Dense matching method and system based on sparse matching and image edges |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN106548482A | 2017-03-29 |
| CN106548482B | 2020-02-07 |
Family ID: 58369223

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610908117.9A (Active) | Dense matching method and system based on sparse matching and image edges | 2016-10-19 | 2016-10-19 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN106548482B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107493465B * | 2017-09-18 | 2019-06-07 | 郑州轻工业学院 | A virtual multi-viewpoint video generation method |
| CN110660088B * | 2018-06-30 | 2023-08-22 | 华为技术有限公司 | Image processing method and device |
Citations (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8675997B2 * | 2011-07-29 | 2014-03-18 | Hewlett-Packard Development Company, L.P. | Feature based image registration |

Family Cites Families (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102982334B * | 2012-11-05 | 2016-04-06 | 北京理工大学 | Sparse disparity acquisition method based on target edge features and gray similarity |
| CN103440653A * | 2013-08-27 | 2013-12-11 | 北京航空航天大学 | Binocular vision stereo matching method |
| CN104751455A * | 2015-03-13 | 2015-07-01 | 华南农业大学 | Crop image dense matching method and system |
| CN105528785B * | 2015-12-03 | 2018-06-15 | 河北工业大学 | A binocular vision image stereo matching method |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |