CN104240219A - Method and system for allocating parallax values

Method and system for allocating parallax values

Info

Publication number
CN104240219A
CN104240219A (application number CN201310233783.3A; granted publication CN104240219B)
Authority
CN
China
Prior art keywords
region
parallax
pixel
value
distributed model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310233783.3A
Other languages
Chinese (zh)
Other versions
CN104240219B (en)
Inventor
刘振华 (Zhenhua Liu)
刘媛 (Yuan Liu)
师忠超 (Zhongchao Shi)
鲁耀杰 (Yaojie Lu)
Current Assignee
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority: CN201310233783.3A
Publication of CN104240219A
Application granted
Publication of CN104240219B
Legal status: Expired - Fee Related (anticipated expiration)


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and system for allocating parallax (disparity) values. In one embodiment, the method may include: obtaining a reference image; dividing the obtained reference image into a plurality of regions; classifying the regions obtained by the division; and allocating parallax values to the pixels in each region based on the classification result. With this method and system, parallax values can be allocated more accurately, yielding an accurate and dense disparity map.

Description

Method and system for allocating parallax values
Technical field
The present invention relates to the field of image processing, and more specifically to a method and system for allocating parallax values.
Background technology
In recent years, stereo vision technology has received wide attention. The basic principle of stereo vision is to combine information from two (binocular) or more viewpoints to obtain images of the same object from different viewing angles, and to use the principle of triangulation to compute the positional offset between corresponding pixels of the images, thereby recovering three-dimensional information about the object. Stereo vision involves stages such as image acquisition, camera calibration, feature extraction, stereo matching, depth computation and interpolation. The parallax (disparity) information obtained by stereo matching can be used to estimate the relative distance between the camera and an object, and can be applied in many fields, such as 3D movies, robotics, surveillance, road detection, pedestrian detection, automatic driving and intelligent vehicle control. For example, in intelligent vehicle control, the road surface, white lines and fences can easily be detected from the disparity map; targets such as pedestrians and vehicles can then be detected, and the vehicle controlled intelligently based on the detection results. Obtaining a robust and accurate disparity map therefore plays an important role in stereo vision.
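As a minimal illustration of the triangulation principle mentioned above (assuming a rectified binocular rig; the function name and the units are illustrative and not taken from the patent):

```python
def depth_from_disparity(d_px, baseline_m, focal_px):
    """Triangulation: depth Z = B * f / d for a rectified stereo pair.

    d_px: disparity in pixels; baseline_m: distance between the two
    camera optical centers in metres; focal_px: focal length in pixels.
    """
    if d_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return baseline_m * focal_px / d_px

# A 12 cm baseline, 800 px focal length and 64 px disparity give Z = 1.5 m
z = depth_from_disparity(64, 0.12, 800)
```

Larger disparities correspond to closer objects, which is why dense, accurate disparity maps translate directly into usable distance estimates.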
Stereo matching algorithms can usually be divided into two classes: pixel-based algorithms and segmentation-based algorithms. A pixel-based algorithm considers each pixel separately. As shown in Fig. 1, a pixel-based algorithm finds a corresponding pixel independently for each of the pixels P and Q, even though P and Q both lie in the road-surface region and their parallax values satisfy a particular relationship. Pixel-based algorithms also require a very long processing time.
The other common class is the segmentation-based algorithm, of which image segmentation is the basic module. Its basic idea is to consider all pixels in the same region block together, i.e., to use the parallax values of the valid pixels in a region block to estimate the parallax distribution of that block. Fig. 2 is a system diagram of a typical segmentation-based stereo matching algorithm: the reference image is segmented, an initial disparity image is computed from the reference image and the target image, a parallax value is computed for each segmented region block, and the initial disparity image is updated with the computed values to obtain the final disparity image.
For example, US patent US7330593B2, entitled "Segment based image matching method and system", discloses an image matching method in which the reference image is segmented by color information, an initial disparity image is generated by a basic segment-based stereo matching algorithm, each segmented region is modeled by plane fitting, and an optimal disparity map is generated using belief propagation.
Segmentation-based algorithms largely solve the long processing time of pixel-based algorithms, but introduce a new problem: in some regions the assigned parallax values are grossly wrong. For the road-surface region, as shown in Fig. 3, the disparity map obtained by a typical segmentation-based algorithm is tilted to the right, which is incorrect. According to "A Complete U-V-Disparity Study for Stereovision Based 3D Driving Environment Analysis" by Zhencheng Hu, Francisco Lamosa and Keiichi Uchimura (Department of Computer Science and Graduate School of Science and Technology, Kumamoto University, Japan), the depth value along each horizontal line of the road-surface region is constant, so a correct disparity map for the road surface should consist of many horizontal lines.
It can be seen that a typical segmentation-based stereo matching algorithm is somewhat blind and relies too heavily on the initial disparity image, so the resulting disparity map may not be accurate enough. As shown in Fig. 4, if the initial disparity image contains many noise points, a wrong disparity map will be produced. A method and system that can allocate the parallax values in a disparity map more accurately are therefore needed.
Summary of the invention
In view of the above problems, the present application proposes a classification-based method and device for allocating parallax values, which use a more robust approach to obtain a more accurate dense disparity map.
According to one aspect of the present invention, a method of allocating parallax values is provided. The method may include: obtaining a reference image; dividing the obtained reference image into a plurality of regions; classifying the regions obtained by the division; and allocating parallax values to the pixels in each region based on the classification result.
Optionally, classifying the regions obtained by the division may include: establishing parallax distribution models for different plane types; for each region obtained by the division, computing a parameter corresponding to the parallax distribution model of each plane type; and determining the plane type of each region based on its parameters.
Optionally, allocating parallax values to the pixels in each region based on the classification result may include: according to the plane type of each region, computing the parallax values of the pixels in that region using the parallax distribution model corresponding to that plane type.
Optionally, establishing parallax distribution models for different plane types may include: establishing a first parallax distribution model corresponding to a first plane type; establishing a second parallax distribution model corresponding to a second plane type; and establishing a third parallax distribution model corresponding to a third plane type. Computing the parameters for each region obtained by the division may then include: computing, for each region, a first parameter corresponding to the first model, a second parameter corresponding to the second model, and a third parameter corresponding to the third model.
Optionally, determining the plane type of a region based on its parameters may include: if the first parameter of a region is less than a predetermined threshold and is sufficiently smaller than the second parameter of that region, determining that the region belongs to the first plane type; if the second parameter of the region is less than the predetermined threshold and is sufficiently smaller than the first parameter, determining that the region belongs to the second plane type; and otherwise, determining that the region belongs to the third plane type. The predetermined threshold may be set based on the third parameter of the region.
Optionally, the method may further include obtaining an initial disparity map corresponding to the reference image. Computing the first parameter corresponding to the first parallax distribution model may include: scanning each row of pixels in each region to obtain, from the initial disparity map, a histogram of the effective parallax values of that row; obtaining the peak of each row's histogram; fitting, according to the first parallax distribution model, a first parallax distribution expression for the region from the points formed by each row's ordinate and the initial parallax value corresponding to its histogram peak; and computing the first parameter of the region from its first parallax distribution expression. An effective parallax value is a parallax value greater than zero.
Optionally, computing the second parameter corresponding to the second parallax distribution model may include: scanning each column of pixels in each region to obtain, from the initial disparity map, a histogram of the effective parallax values of that column; obtaining the peak of each column's histogram; fitting, according to the second parallax distribution model, a second parallax distribution expression for the region from the points formed by each column's abscissa and the parallax value corresponding to its histogram peak; and computing the second parameter of the region from its second parallax distribution expression.
Optionally, computing the third parameter corresponding to the third parallax distribution model may include: scanning all pixels in each region to obtain their effective parallax values from the initial disparity map; fitting, according to the third parallax distribution model, a third parallax distribution expression for the region from the abscissas, ordinates and effective parallax values of all pixels in the region having effective parallax values; and computing the third parameter of the region from its third parallax distribution expression.
Optionally, computing the parallax values of the pixels in a region using the model corresponding to its plane type may include: if the region is classified as the first plane type, computing the parallax values of its pixels according to the region's first parallax distribution expression; if the region is classified as the second plane type, computing them according to its second parallax distribution expression; and if the region is classified as the third plane type, computing them according to its third parallax distribution expression. In each case the computed value is the parallax value allocated to the pixel.
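Using the general model forms developed later in the description (formulas (5), (10) and (13)), this per-class allocation reduces to a simple dispatch on the region's class; the function and coefficient names below are illustrative:

```python
def assign_disparity(plane_type, coeffs, x, y):
    """Disparity allocated to pixel (x, y) by its region's fitted model.

    coeffs holds the fitted coefficients of the chosen expression:
    class 1: (C1, C2)     -> d = C1*y + C2           (form of formula (5))
    class 2: (C1, C2)     -> d = C1*x + C2           (form of formula (10))
    class 3: (C1, C2, C3) -> d = C1*x + C2*y + C3    (form of formula (13))
    """
    if plane_type == 1:
        c1, c2 = coeffs
        return c1 * y + c2   # first class: depends only on the row
    if plane_type == 2:
        c1, c2 = coeffs
        return c1 * x + c2   # second class: depends only on the column
    c1, c2, c3 = coeffs
    return c1 * x + c2 * y + c3  # third class: affine in both coordinates
```

Every pixel in a region thus receives a value from the same fitted expression, which is what makes the final map dense even where the initial disparities were sparse or noisy.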
According to the above embodiments of the present invention, the reference image is divided into a plurality of regions which are then classified, and different parallax allocation methods are adopted for different classes. The allocation method corresponding to the plane type of a region block can thus be selected adaptively to allocate the parallax values of the pixels in that block, yielding more accurate parallax values.
According to another aspect of the present invention, a system for allocating parallax values is provided. The system may include: an acquiring unit configured to obtain a reference image; a segmentation unit configured to divide the obtained reference image into a plurality of regions; a classification unit configured to classify the regions produced by the segmentation unit; and an allocation unit configured to allocate parallax values to the pixels in each region based on the classification result of the classification unit.
Optionally, the classification unit may classify the regions by: establishing parallax distribution models for different plane types; computing, for each region obtained by the segmentation, a parameter corresponding to the parallax distribution model of each plane type; and determining the plane type of each region based on its parameters.
Optionally, the allocation unit may allocate parallax values to the pixels in each region by: according to the plane type of each region, computing the parallax values of the pixels in that region using the parallax distribution model corresponding to that plane type.
A typical segmentation-based stereo matching algorithm is too blind and relies too heavily on the initial disparity image. In the method and system according to embodiments of the present invention, by contrast, the regions are classified and the parallax values are computed according to the class of each region, so that known scene information is incorporated into the disparity computation. The method is therefore more robust and the resulting disparity image more accurate.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of a pixel-based stereo matching algorithm.
Fig. 2 is a schematic diagram of a segmentation-based stereo matching algorithm.
Fig. 3 illustrates the road-surface region and the corresponding disparity map obtained by a segmentation-based method.
Fig. 4 illustrates the disparity map obtained when the initial disparity map contains too many noise points.
Fig. 5 is a block diagram of a hardware system to which embodiments of the invention are applicable.
Fig. 6 is a flowchart of a method of allocating pixel parallax values according to an embodiment of the invention.
Fig. 7 is a flowchart of a method of classifying the regions obtained by segmentation according to an embodiment of the invention.
Figs. 8(a) to 8(d) are schematic diagrams of the three plane types according to an embodiment of the invention, with corresponding examples.
Fig. 9 is a flowchart of a method of computing the first parameter according to an embodiment of the invention.
Fig. 10 is a schematic diagram of computing the first parameter based on the first parallax distribution model according to an embodiment of the invention.
Fig. 11 is a flowchart of a method of computing the second parameter according to an embodiment of the invention.
Fig. 12 is a schematic diagram of computing the second parameter based on the second parallax distribution model according to an embodiment of the invention.
Fig. 13 is a flowchart of a method of computing the third parameter according to an embodiment of the invention.
Figs. 14(a) and 14(b) illustrate a disparity map obtained by a typical segmentation-based stereo matching algorithm.
Figs. 15(a) and 15(b) illustrate a disparity map obtained according to an embodiment of the present invention.
Fig. 16 is a block diagram of a system for allocating parallax values according to an embodiment of the invention.
Embodiment
Reference will now be made in detail to specific embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with specific embodiments, it will be understood that this is not intended to limit the invention to the disclosed embodiments. It should be noted that the method steps described herein may be implemented by any functional block or functional arrangement, and any functional block or functional arrangement may be implemented as a physical entity, a logical entity, or a combination of both.
As mentioned above, a typical segmentation-based method is blind in that it relies too heavily on the initial disparity image; if known scene information is introduced into the disparity computation, the allocation of parallax values becomes more accurate. According to the present invention, the region blocks obtained by segmenting the reference image are classified, and the parallax values are allocated according to the class of each region block, thereby obtaining a more accurate disparity map, as described in detail below.
First, with reference to Fig. 5, a block diagram of a hardware system 100 to which embodiments of the invention are applicable is described.
The hardware system 100 comprises: a stereo camera 110 for capturing two or more images from two or more viewpoints; a decoder 120 for extracting image information, such as the grayscale and chromatic information of the pixels, from the images captured by the stereo camera 110; a digital signal processor 130 for performing digital signal processing on the information output by the decoder 120; a memory 140, coupled with the digital signal processor 130, for storing data processed by the digital signal processor 130 or providing data to it; and other application-related modules 150 that take further action using the processing results of the digital signal processor 130.
The method and system according to embodiments of the invention may be implemented in the digital signal processor 130 shown in Fig. 5. Of course, this is only an example, and the implementation is not limited thereto.
A method of allocating pixel parallax values according to an embodiment of the invention is described below with reference to the flowchart of Fig. 6.
As shown in Fig. 6, the method 600 of allocating parallax values may comprise:
Step 601: obtaining a reference image;
Step 602: dividing the obtained reference image into a plurality of regions;
Step 603: classifying the regions obtained by the division;
Step 604: allocating parallax values to the pixels in each region based on the classification result.
In step 601, a reference image of the target scene may be obtained by any known method. For example, a left-eye image and a right-eye image of the target scene may be captured by a binocular camera, and either one, e.g. the left-eye image, may be taken as the reference image, with the right-eye image as the target image, or vice versa. Of course, this is only an example, and the method of obtaining the reference image is not limited thereto.
Optionally, in one embodiment, the reference image and the target image are input in step 601 and the corresponding initial disparity image is derived from them. Alternatively, in another embodiment, the reference image and the corresponding initial disparity image are input directly. The initial disparity map may be obtained by any method known in the art, such as a stereo matching algorithm, although the method of obtaining it is not limited thereto.
In step 602, the reference image is segmented. Image segmentation is the process of dividing an image into several regions such that the pixels in each region share some common or similar features. The choice of feature is vital for an image segmentation algorithm: if color (grayscale) information is used as the feature, the pixels in each segmented region have almost identical color (grayscale) values. Mean-shift-based algorithms are commonly used for image segmentation, and color (grayscale) is a commonly used feature. For example, "Mean Shift, Mode Seeking, and Clustering" by Yizong Cheng, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 8, August 1995, describes the mean-shift algorithm and its applications in detail, and is incorporated herein by reference in its entirety. The segmentation process is illustrated below using color (grayscale) as the segmentation feature, but the choice of feature is not limited to color or grayscale; other features known in the art may also be adopted.
First, the segmentation feature is defined. Considering the color (grayscale) information of a pixel (x, y), the feature can be expressed as (I_r, I_g, I_b) or I, where (I_r, I_g, I_b) are the R, G, B color values of the pixel and I is its grayscale value. Then, a convergence point is found for each pixel in the feature space by the mean-shift process. A convergence point is a point in feature space at which the feature value no longer changes after drifting. All pixels are then clustered according to their convergence points, yielding multiple pixel regions that satisfy a predetermined property (e.g., having common or similar features). Fig. 2 shows a segmentation result in which the reference image is divided into multiple regions.
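A deliberately simplified one-dimensional sketch of the mean-shift drift described above, moving each grayscale value to its local density mode (a practical segmenter, such as the Comaniciu-Meer algorithm the mean-shift literature builds on, works jointly over spatial and color coordinates; the bandwidth and data here are illustrative):

```python
import numpy as np

def mean_shift_modes(values, bandwidth=10.0, iters=20):
    """Drift each gray value toward the mean of its flat-kernel window.

    After convergence, values that drifted to the same mode belong to
    the same cluster; clustering on convergence points is the grouping
    step the text describes.
    """
    pts = np.asarray(values, dtype=float)
    shifted = pts.copy()
    for _ in range(iters):
        for i, v in enumerate(shifted):
            window = pts[np.abs(pts - v) <= bandwidth]  # points within bandwidth
            shifted[i] = window.mean()                  # move to the window's mean
    return np.round(shifted).astype(int)

# Pixels drawn from two gray populations converge to two modes (two regions)
gray = [10, 12, 11, 200, 198, 202]
modes = mean_shift_modes(gray)
```

Here the first three values all converge to the mode 11 and the last three to 200, so clustering by convergence point splits the pixels into two regions.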
Of course, any basic image segmentation algorithm may be used to segment the reference image. Here it is assumed that the actual scene consists of many planes and that each region block obtained by the segmentation corresponds to one plane in the scene. After the segmentation, information related to the segmentation result is output; this information may include, but is not limited to, one or more of: the number of segmented regions, the number of pixels in each region, and a label image of the whole reference image. This information is used in the subsequent steps.
In step 603, the regions obtained by the segmentation are classified. An exemplary method 700 of classifying the segmented regions according to an embodiment of the invention is described with reference to the flowchart of Fig. 7.
As shown in Fig. 7, the classification method 700 may comprise:
Step 701: establishing parallax distribution models for different plane types;
Step 702: for each region obtained by the segmentation, computing the parameter corresponding to the parallax distribution model of each plane type;
Step 703: determining the plane type of each region based on the parameters obtained in step 702.
In general, the planes in an actual scene can be divided into three classes. In the world coordinate system (X_w, Y_w, Z_w) shown in Fig. 8, the three classes are: planes rotated about the X_w axis (the first class), as shown in Fig. 8(a), where all planes of the first class share the same parallax distribution pattern; planes rotated about the Y_w axis (the second class), as shown in Fig. 8(b), where all planes of the second class share the same parallax distribution pattern; and planes rotated about the Z_w axis (the third class), as shown in Fig. 8(c), where all planes of the third class share the same parallax distribution pattern. In Fig. 8, (x_r, y_r) is the image coordinate system of the reference image and (x_l, y_l) is that of the target image.
Fig. 8(d) gives examples of the three kinds of planes in an actual scene. For the image shown in Fig. 8(d), the image coordinate system is set up as follows: the pixel at the top-left corner is the origin, the positive X direction is horizontally to the right, and the positive Y direction is vertically downward. The corresponding world coordinate system has the positive X_w axis horizontally to the right, the positive Y_w axis vertically downward, and the positive Z_w axis perpendicular to the page, pointing inward. As shown in Fig. 8(d), the road surface, the rear plane of the vehicle and the front of the building belong to the first class; the side of the building belongs to the second class; and the inclined wall standing beside the walkway belongs to the third class. The rectangular boxes marking the three plane types in Fig. 8(d) are only schematic and are not actual segmentation results of the reference image; of course, the segmented regions are not limited to rectangles.
An exemplary implementation of establishing the parallax distribution models in step 701 is described below. "A Complete U-V-Disparity Study for Stereovision Based 3D Driving Environment Analysis" by Zhencheng Hu et al. (Department of Computer Science and Graduate School of Science and Technology, Kumamoto University, Japan), incorporated herein by reference in its entirety, gives the plane equation and parallax distribution equation of each of these three plane types. For a plane of the first class, the plane equation and the parallax distribution equation d(x, y) are given by formulas (1) and (2) respectively:

Z_w = c_1·Y_w + c_2  (c_2 ≠ 0)    Formula (1)

d(x, y) = B·f/c_2 − (B·c_1/c_2)·(y − y_0)    Formula (2)

where (x_0, y_0) are the coordinates of the image center, and B and f are camera parameters: B is the baseline, i.e., the distance between the optical centers of the two cameras in the binocular vision system, and f is the focal length of the camera.
A special plane may exist that belongs to both the first and second classes. According to the theory of Zhencheng Hu et al., its plane equation and parallax distribution equation d(x, y) are given by formulas (3) and (4) respectively:

Y_w = c  (c ≠ 0)    Formula (3)

d(x, y) = (B/c)·(y − y_0)    Formula (4)

Here, this plane is assigned to the first class. The parallax distribution equation d(x, y) of a first-class plane can therefore be written in the general form (5):

d(x, y) = C_1·y + C_2    Formula (5)
According to the theory of Zhencheng Hu et al., the plane equation and parallax distribution equation d(x, y) of a second-class plane are given by formulas (6) and (7) respectively:

Z_w = c_1·X_w + c_2  (c_1 ≠ 0, c_2 ≠ 0)    Formula (6)

d(x, y) = [2B·c_1/(B·c_1 − 2c_2)]·(x − x_0) − 2B·f/(B·c_1 − 2c_2)    Formula (7)

where, as before, (x_0, y_0) are the coordinates of the image center, B is the baseline, i.e., the distance between the optical centers of the two cameras in the binocular vision system, and f is the focal length of the camera.
Another special plane may exist that belongs to both the second and third classes. According to the theory of Zhencheng Hu et al., its plane equation and parallax distribution equation d(x, y) are given by formulas (8) and (9) respectively:

X_w = c  (c ≠ 0)    Formula (8)

d(x, y) = [2B/(2c + B)]·(x − x_0)    Formula (9)

Here, this plane is assigned to the second class. The parallax distribution equation d(x, y) of a second-class plane can therefore be written in the general form (10):

d(x, y) = C_1·x + C_2  (C_1 ≠ 0)    Formula (10)
According to the theory of Zhencheng Hu et al., the plane equation and parallax distribution equation d(x, y) of a third-class plane are given by formulas (11) and (12) respectively:

Y_w = c_1·X_w + c_2  (c_1 ≠ 0, c_2 ≠ 0)    Formula (11)

d(x, y) = [2B·c_1/(B·c_1 − 2c_2)]·(x − x_0) + [2B/(B·c_1 − 2c_2)]·(y − y_0)    Formula (12)

where, as before, (x_0, y_0) are the coordinates of the image center, B is the baseline and f is the focal length. The parallax distribution equation d(x, y) of a third-class plane can therefore be written in the general form (13):

d(x, y) = C_1·x + C_2·y + C_3  (C_1 ≠ 0, C_2 ≠ 0)    Formula (13)
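The general forms can be checked numerically. For example, expanding formula (2) shows it is an instance of form (5), with C_1 = −B·c_1/c_2 and C_2 = B·f/c_2 + (B·c_1/c_2)·y_0; the camera and plane values below are illustrative only:

```python
# Numerical check that formula (2) reduces to the first-class form (5).
B, f = 0.1, 700.0   # baseline (m) and focal length (px), assumed values
c1, c2 = 0.5, 2.0   # plane coefficients of formula (1), assumed values
y0 = 240.0          # ordinate of the image center, assumed value

def d_formula_2(y):
    """Formula (2): d(x, y) = B*f/c2 - (B*c1/c2)*(y - y0)."""
    return B * f / c2 - (B * c1 / c2) * (y - y0)

# Coefficients of the equivalent general form (5), d = C1*y + C2
C1 = -B * c1 / c2
C2 = B * f / c2 + (B * c1 / c2) * y0

# Both expressions agree on every image row
assert all(abs(d_formula_2(y) - (C1 * y + C2)) < 1e-9 for y in range(480))
```

The same expansion applies to formulas (7), (9) and (12), which reduce to forms (10) and (13) in the same way.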
Next, in step 702, based on the general expressions (5), (10) and (13) of the parallax distribution equations d(x, y) established in step 701, i.e., the parallax distribution models of the first, second and third plane classes, the parameter of the model corresponding to each class is computed for each of the regions obtained by the segmentation in step 602.
Specifically, in one embodiment, for any region S obtained by the segmentation, a parameter (hereinafter the first parameter) is computed according to the parallax distribution model of the first class (hereinafter the first parallax distribution model), a parameter (hereinafter the second parameter) is computed according to the model of the second class (hereinafter the second parallax distribution model), and a parameter (hereinafter the third parameter) is computed according to the model of the third class (hereinafter the third parallax distribution model).
For example, if region S belongs to the first plane class, its parallax distribution satisfies the first parallax distribution model d(x, y) = C_1·y + C_2, i.e., every row of S has the same parallax value. In practice, a row of S does not necessarily have a strictly unique initial parallax value; a histogram can therefore be computed for each row, and the parallax value corresponding to the histogram peak is taken as the unique correct parallax value of that row.
Suppose the height of region S is h, i.e., it has h rows of pixels. Then, based on the parallax value of each pixel in the initial disparity map obtained in step 601, the first parameter of S is computed according to the first parallax distribution model (formula (5)).
Specifically, an exemplary method 900 of computing the first parameter according to an embodiment of the invention is described with reference to the flowchart of Fig. 9. In this embodiment, the first parameter is computed according to the first parallax distribution model, based on the parallax value of each pixel in the initial disparity map obtained in step 601.
As shown in Fig. 9, the method 900 of computing the first parameter may comprise:
Step 901: scanning each row of pixels of region S to obtain, from the initial disparity map, the histogram H_{i,d} of the effective parallax values (parallax values greater than zero) of that row, where i denotes the row number, i = 0, 1, ..., h−1, and d = 1, 2, ..., d_max, with d_max being a fixed value equal to the maximum parallax value that may occur.
In step 902, the peak of the effective-parallax-value histogram of each row is obtained. Meanwhile, the index corresponding to this peak, i.e., the most frequently occurring parallax value, can be collected:

d_i = argmax_{1 ≤ d ≤ d_max} H_{i,d},  i = 0, 1, 2, ..., h−1    Formula (14)
In step 903, according to the first parallax distribution model, the first parallax distribution expression of this region is fitted using the point pairs (y_i, d_i) formed by the ordinate y_i of each row of pixels and the parallax value d_i corresponding to its histogram peak, where i = 0, 1, ..., h−1. For example, according to the first parallax distribution model d(x, y) = C1·y + C2 described above, the equation of a straight line can be fitted as the first parallax distribution expression of region S. The fitted straight-line equation can be as shown in formula (15):

d(x, y) = a·y + b    Formula (15)

where the values of the parameters a and b are determined by fitting the points (y_i, d_i).
In step 904, the first parameter is calculated based on the first parallax distribution expression of region S obtained in step 903. For example, suppose region S contains n valid points (points whose initial parallax value is greater than zero) (X_k, Y_k, d_k), where d_k > 0, k = 0, 1, ..., n−1. The first parameter E_1 of region S is then defined as shown in formula (16):

E_1 = Σ_{k=0}^{n−1} [d_k − (a·Y_k + b)]²    Formula (16)
Figure 10 shows a schematic diagram of calculating the first parameter based on the first parallax distribution model according to an embodiment of the invention. If region S belongs to the first-class plane, the value of the first parameter E_1 calculated over the effective pixels of region S according to expression (16) should be very small.
It should be noted that in this embodiment not all pixels in region S are processed; only the pixels whose initial parallax value is greater than zero are processed, and these pixels are considered effective pixels.
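As a concrete illustration, the row-scan procedure of steps 901–904 can be sketched as follows. This is a minimal sketch assuming the region's initial parallax values are given as a NumPy array in which zeros mark invalid pixels; the function name `first_parameter` and this data layout are illustrative and not part of the patent text.

```python
import numpy as np

def first_parameter(disp, d_max):
    """Sketch of method 900: fit d(x, y) = a*y + b to the row-wise
    histogram peaks of a region's initial parallax values and return
    the fitting error E1 of formula (16). Zeros in `disp` mark pixels
    without an effective parallax value."""
    h, _ = disp.shape
    ys, ds = [], []
    for i in range(h):
        valid = disp[i][disp[i] > 0]              # effective parallax values only
        if valid.size == 0:
            continue
        hist = np.bincount(valid.astype(int), minlength=d_max + 1)
        d_i = int(np.argmax(hist[1:])) + 1        # formula (14): histogram peak per row
        ys.append(i)
        ds.append(d_i)
    a, b = np.polyfit(ys, ds, 1)                  # formula (15): d(x, y) = a*y + b
    Y, X = np.nonzero(disp > 0)                   # all effective pixels (X_k, Y_k, d_k)
    d_k = disp[Y, X]
    E1 = float(np.sum((d_k - (a * Y + b)) ** 2))  # formula (16)
    return a, b, E1
```

Scanning by column instead of by row and fitting d(x, y) = e·x + f in the same way yields the second parameter E_2 of method 1100 described below.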
For the second parameter, similarly, the second parameter is calculated for each region according to the second parallax distribution model (formula (10)), based on the parallax value of each pixel in the initial parallax map obtained in step 601. Suppose the width of region S is l, i.e., it has l columns of pixels. The flowchart of Figure 11 shows an exemplary method 1100 of calculating the second parameter according to an embodiment of the invention. The basic principle of method 1100 is similar to that of method 900 shown in Figure 9, except that in this embodiment region S is scanned by column.
For example, as shown in Figure 11, the second parameter calculation method 1100 can comprise:
Step 1101: scan each column of pixels of region S and compute the histogram H_{j,d} of its effective parallax values (parallax values greater than zero), where j denotes the column number, j = 0, 1, 2, ..., l−1;
Step 1102: find the peak of the histogram of each column of pixels. In addition, the index corresponding to this peak, i.e., the most frequently occurring parallax value, can be collected:

d_j = argmax_{1 ≤ d ≤ d_max} H_{j,d},  j = 0, 1, 2, ..., l−1    Formula (17)
Step 1103: according to the second parallax distribution model, fit the second parallax distribution expression using the point pairs (x_j, d_j) formed by the abscissa x_j of each column of pixels and the parallax value d_j corresponding to its histogram peak, where j = 0, 1, 2, ..., l−1. For example, according to the second parallax distribution model d(x, y) = C1·x + C2 (C1 ≠ 0) described above, the equation of a straight line can be fitted as the second parallax distribution expression of region S. The straight-line equation obtained by fitting can be as shown in formula (18):

d(x, y) = e·x + f    Formula (18)

where the values of the parameters e and f are determined by fitting the points (x_j, d_j).
Step 1104: calculate the second parameter based on the second parallax distribution expression of region S obtained in step 1103. Suppose region S contains n valid points (points whose initial parallax value is greater than zero) (X_k, Y_k, d_k), where d_k > 0, k = 0, 1, ..., n−1. The second parameter E_2 of region S is then defined as shown in formula (19):

E_2 = Σ_{k=0}^{n−1} [d_k − (e·X_k + f)]²    Formula (19)
Figure 12 shows a schematic diagram of calculating the second parameter based on the second parallax distribution model. If region S belongs to the second-class plane, the value of the second parameter E_2 calculated over the effective pixels of region S according to expression (19) should be very small.
Likewise, in this embodiment not all pixels in region S are processed; only the pixels whose initial parallax value is greater than zero are processed, and these pixels are considered effective pixels.
For the third parameter, the third parameter is calculated for each region according to the third parallax distribution model (formula (13)), based on the parallax value of each pixel in the initial parallax map obtained in step 601. For example, suppose region S contains n valid points (points whose initial parallax value is greater than zero) (X_k, Y_k, d_k), where d_k > 0, k = 0, 1, ..., n−1. The flowchart of Figure 13 shows an exemplary method 1300 of calculating the third parameter according to an embodiment of the invention.
As shown in Figure 13, the third parameter calculation method 1300 can comprise:
Step 1301: scan all pixels in region S to obtain the effective parallax value of each pixel;
Step 1302: according to the third parallax distribution model, fit the third parallax distribution expression of this region using the abscissas X_k, ordinates Y_k, and corresponding parallax values d_k of all pixels in region S that have effective parallax values. For example, according to the parallax distribution model of the third-class plane, d(x, y) = C1·x + C2·y + C3 (C1 ≠ 0, C2 ≠ 0) (formula (13)), a plane is fitted to these points by least squares as the third parallax distribution expression of this region. This process is equivalent to minimizing expression (20) below:

S = Σ_{k=0}^{n−1} (a1·X_k + a2·Y_k + a3 − d_k)²    Formula (20)
To minimize S, ∂S/∂a_p = 0 should hold for p = 1, 2, 3, which is equivalent to expressions (21) and (22) below:

Σ 2(a1·X_k + a2·Y_k + a3 − d_k)·X_k = 0
Σ 2(a1·X_k + a2·Y_k + a3 − d_k)·Y_k = 0
Σ 2(a1·X_k + a2·Y_k + a3 − d_k) = 0    Formula (21)

| ΣX_k²     ΣX_k·Y_k  ΣX_k |   | a1 |   | ΣX_k·d_k |
| ΣX_k·Y_k  ΣY_k²     ΣY_k | · | a2 | = | ΣY_k·d_k |
| ΣX_k      ΣY_k      n    |   | a3 |   | Σd_k     |    Formula (22)
where a1, a2, and a3 are obtained by matrix operations, yielding the following plane equation (23) as the third parallax distribution expression of region S:

d(x, y) = a1·x + a2·y + a3    Formula (23)
Step 1303: calculate the third parameter based on the third parallax distribution expression of region S obtained in step 1302. The third parameter E_3 of region S is defined by expression (24):

E_3 = Σ_{k=0}^{n−1} [d_k − (a1·X_k + a2·Y_k + a3)]²    Formula (24)
Similarly, if region S belongs to the third-class plane, the value of the third parameter E_3 calculated over the effective pixels of region S according to expression (24) should be very small. Again, in this embodiment not all pixels in region S are processed; only the pixels whose initial parallax value is greater than zero are processed, and these pixels are considered effective pixels.
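The least-squares plane fit of step 1302 and the error of formula (24) can be sketched as follows, again assuming the region's parallax values are given as a NumPy array with zeros marking invalid pixels; the function name and data layout are illustrative.

```python
import numpy as np

def third_parameter(disp):
    """Sketch of method 1300: fit the plane d(x, y) = a1*x + a2*y + a3
    to all effective pixels of a region by least squares and return the
    fitting error E3 of formula (24)."""
    Y, X = np.nonzero(disp > 0)                   # effective pixels (X_k, Y_k)
    d = disp[Y, X]
    A = np.column_stack([X, Y, np.ones_like(X)])
    # Normal equations (22): (A^T A) [a1 a2 a3]^T = A^T d
    a1, a2, a3 = np.linalg.solve(A.T @ A, A.T @ d)
    E3 = float(np.sum((d - (a1 * X + a2 * Y + a3)) ** 2))  # formula (24)
    return (a1, a2, a3), E3
```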
Thus far, the first, second, and third parameters E_1, E_2, E_3 of an arbitrary region S in the reference image have been estimated based on the first, second, and third parallax distribution models corresponding to the first-class, second-class, and third-class planes.
Returning to Fig. 7, in step 703, the plane type to which region S belongs is determined based on the first, second, and third parameters E_1, E_2, E_3 of region S calculated in step 702.
In one embodiment, a discriminant function G(x, y, T, t) is defined as follows:

G(x, y, T, t) = 1 if x < T and x/y ≤ t; otherwise G(x, y, T, t) = 0,

where T and t are predetermined thresholds; that is, the value of the function G is 1 if x is less than the threshold T and x is sufficiently small compared with y, and 0 otherwise.
Then, region S is classified as follows:

S ∈ class 1, if G(E_1, E_2, T, t) == 1
S ∈ class 2, if G(E_2, E_1, T, t) == 1
S ∈ class 3, if G(E_1, E_2, T, t) + G(E_2, E_1, T, t) == 0
According to the properties of the least squares method, E_3 = min(E_1, E_2, E_3); therefore, E_3 cannot be used directly for classification. Instead, only E_1 and E_2 are used for classification, and the value of T is set based on E_3. For example, in one embodiment, T = 1.2·E_3 can be used and t can be taken as 0.8; that is, if x/y ≤ 0.8, x is considered sufficiently small compared with y. Of course, this is only for illustration; those skilled in the art can choose any other suitable values of T and t according to actual needs.
Therefore, if E_1 is less than the predetermined threshold T and E_1 is sufficiently small compared with E_2, region S is classified as a first-class plane; if E_2 is less than the predetermined threshold T and E_2 is sufficiently small compared with E_1, region S is classified as a second-class plane; if neither of the above two conditions is met, region S is classified as a third-class plane.
Thus, the plane type to which region S belongs can be determined based on the estimated first, second, and third parameters E_1, E_2, E_3 of region S.
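The classification rule above can be sketched as follows. T = 1.2·E_3 and t = 0.8 follow the example values of this embodiment, while the function name and the integer class labels are illustrative assumptions.

```python
def classify_region(E1, E2, E3, t=0.8, T_scale=1.2):
    """Sketch of the classification rule of step 703: the discriminant
    G(x, y, T, t) is 1 when x < T and x is sufficiently small compared
    with y (x/y <= t), with T derived from E3."""
    T = T_scale * E3

    def G(x, y):
        # x <= t * y is equivalent to x / y <= t while avoiding division by zero
        return 1 if (x < T and x <= t * y) else 0

    if G(E1, E2) == 1:
        return 1   # first-class plane:  d(x, y) = a*y + b
    if G(E2, E1) == 1:
        return 2   # second-class plane: d(x, y) = e*x + f
    return 3       # third-class plane:  d(x, y) = a1*x + a2*y + a3
```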
It should be noted that if the segmentation of the reference image in step 602 is not accurate enough, part of region S may belong to one plane type and another part to a different plane type; for example, one part may belong to the first-class plane and another part to the second-class plane. In that case, when region S is classified, it satisfies neither the condition of the first-class plane nor the condition of the second-class plane, so region S will be classified as a third-class plane. Likewise, if region S comprises all three plane types, region S will also be classified as a third-class plane. That is, the third-class plane absorbs the regions that do not meet the predefined rules, such as regions comprising two or more plane types.
Therefore, to classify planes more accurately in practical applications, the reference image should be kept in an over-segmented state when it is split, i.e., the region blocks obtained by segmentation should not be too large. This ensures that each region block corresponds to one plane in the actual scene and avoids, as far as possible, the situation where one region comprises two or more plane types. This can be achieved by setting the predefined rules adopted during segmentation; for example, in one embodiment, the segmentation criterion can be made stricter, such as dividing only pixels with identical color properties into one region. Of course, this is not a requirement; those skilled in the art can set the segmentation criterion appropriately according to actual needs.
Of course, the embodiments disclosed above are not limited to this method of classifying the segmented regions; the above embodiments are described only by way of example so that those skilled in the art may better understand the present invention. In practical applications, those skilled in the art can adopt any other classification method within the spirit of the invention, and can also judge the plane type using parameter values set differently from the first, second, and third parameters disclosed above.
Returning to Fig. 6, in step 604, parallax values are distributed to the pixels in each region based on the classification results of step 603. For example, parallax values can be distributed to the pixels in each region according to the determined plane type of region S. In one embodiment, after the classification results of all regions are obtained in step 603, the parallax values can be calculated, according to the plane type to which each region belongs, using the corresponding parallax distribution expression.
If S is classified as a first-class plane, the parallax values of the pixels in region S are calculated according to the first parallax distribution expression d(x, y) = a·y + b of formula (15), where (x, y) ∈ S;

If S is classified as a second-class plane, the parallax values of the pixels in region S are calculated according to the second parallax distribution expression d(x, y) = e·x + f of formula (18), where (x, y) ∈ S;

If S is classified as a third-class plane, the parallax values of the pixels in region S are calculated according to the third parallax distribution expression d(x, y) = a1·x + a2·y + a3 of formula (23), where (x, y) ∈ S.
In this embodiment, all the parameters a, b, e, f, a1, a2, a3 are determined in step 603. In particular, a and b are determined in step 903, e and f in step 1103, and a1, a2, and a3 in step 1302. Therefore, for any region S whose plane type has been determined, the parallax values of the pixels in this region can be calculated by the parallax distribution model corresponding to this plane type.
Thus, the calculated parallax values can be distributed to the corresponding pixels in each region of the reference image to update or replace the corresponding initial parallax values in the initial parallax map, obtaining a disparity map corresponding to this reference image.
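The per-region assignment of step 604 can be sketched as follows. The mapping of region ids to pixel coordinates and to fitted (class, parameters) pairs is an assumed data layout for illustration, not part of the patent text.

```python
import numpy as np

def assign_parallax(shape, regions, models):
    """Sketch of step 604: fill a dense parallax map by evaluating, for
    the pixels of each region, the distribution expression of the plane
    class assigned to that region. `regions` maps a region id to its
    pixel coordinates (ys, xs); `models` maps it to (class, params)."""
    disp = np.zeros(shape, dtype=float)
    for rid, (ys, xs) in regions.items():
        ys, xs = np.asarray(ys), np.asarray(xs)
        cls, params = models[rid]
        if cls == 1:                      # formula (15): d(x, y) = a*y + b
            a, b = params
            disp[ys, xs] = a * ys + b
        elif cls == 2:                    # formula (18): d(x, y) = e*x + f
            e, f = params
            disp[ys, xs] = e * xs + f
        else:                             # formula (23): d(x, y) = a1*x + a2*y + a3
            a1, a2, a3 = params
            disp[ys, xs] = a1 * xs + a2 * ys + a3
    return disp
```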
Because the typical segmentation-based stereo matching algorithm relies too blindly on the initial parallax image, the disparity map it obtains usually contains errors in some regions. By contrast, the method of configuring parallax values according to this embodiment of the invention segments the reference image into multiple regions, classifies those regions, and configures the parallax values of the pixels in each region according to the region's classification. Because this embodiment of the invention applies different parallax value configuration methods to regions belonging to different categories, parallax values can be configured more accurately.
In one embodiment, by establishing the parallax distribution models of the plane types and estimating, for each region obtained by segmentation, the parameters corresponding to the parallax distribution model of each plane type, the plane type to which a region belongs can be determined based on the region's parameters. Because the segmented region blocks are classified according to the parallax distribution models, and the parallax values are configured using accurately obtained parallax distribution models, the resulting disparity map is more accurate.
Figure 14 shows a disparity map obtained by a typical segmentation-based stereo matching algorithm, and Figure 15 shows a disparity map obtained by the method of this embodiment. As can be seen from Figure 14, some regions of the disparity map obtained by the typical segmentation-based stereo matching algorithm contain errors, and these errors are corrected in the disparity map obtained by the method of the present embodiment. For the road surface region, a correct disparity map should consist of many horizontal lines; comparing Figure 14(b) with the disparity map shown in Figure 15(b), Figure 14(b) is wrong, whereas the disparity map shown in Figure 15(b) is correct. The new disparity map obtained by the method of the present embodiment is more accurate and robust, which shows that the parallax value configuration method of the present embodiment is very effective.
According to another embodiment of the present invention, a system for configuring parallax values is provided. Figure 16 shows a block diagram of a system 1600 for configuring parallax values according to another embodiment of the present invention. As shown in Figure 16, the system 1600 can comprise: an acquiring unit 1610 configured to obtain a reference image; a segmentation unit 1620 configured to divide the reference image into multiple regions; a classification unit 1630 configured to classify the multiple regions obtained by the segmentation unit 1620; and an allocation unit 1640 configured to distribute parallax values to the pixels in each region based on the classification results of the classification unit 1630.
In one embodiment, the classification unit 1630 can classify the multiple regions by: establishing parallax distribution models of the plane types; calculating, for each region obtained by segmentation, the parameters corresponding to the parallax distribution model of each plane type; and determining the plane type to which each region belongs based on the region's parameters.
In one embodiment, the allocation unit 1640 can distribute parallax values to the pixels in each region by: according to the plane type to which each region belongs, calculating the parallax values of the pixels in this region using the parallax distribution model corresponding to this plane type.
Thus, in the method and system according to embodiments of the invention, parallax values are calculated according to the classification of the regions and the category to which each region belongs, incorporating known scene information into the disparity computation process. Therefore, the method according to embodiments of the invention is more robust, and the parallax image obtained is more accurate.
The block diagrams of the devices, apparatuses, equipment, and systems involved in the disclosure are only illustrative examples and are not intended to require or imply that connection, arrangement, or configuration must be performed in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems can be connected, arranged, or configured in any manner. Words such as "comprise", "include", and "have" are open-ended terms meaning "including but not limited to", and can be used interchangeably with it. The words "or" and "and" as used here refer to "and/or" and can be used interchangeably with it, unless the context clearly indicates otherwise. The words "such as" used here refer to the phrase "such as, but not limited to" and can be used interchangeably with it.
The step flowcharts in the disclosure and the above method descriptions are only illustrative examples and are not intended to require or imply that the steps of each embodiment must be performed in the order given. As those skilled in the art will recognize, the steps in the above embodiments can be performed in any order. Words such as "thereafter", "then", and "next" are not intended to limit the order of the steps; these words are used only to guide the reader through the description of the methods. Furthermore, any reference to an element in the singular, for example using the articles "a", "an", or "the", is not to be construed as limiting the element to the singular.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of configuring parallax values, comprising:
obtaining a reference image;
dividing the obtained reference image into multiple regions;
classifying the multiple regions obtained by the division; and
distributing parallax values to the pixels in each region based on the classification results.
2. the method for claim 1, the step classified in the multiple regions wherein obtained segmentation comprises:
Set up the parallax distributed model of Different Plane type;
The each region obtained for segmentation calculates the parameter corresponding with the parallax distributed model of each plane type;
Parameters based on each region determines the plane type belonging to this region.
3. The method of claim 2, wherein the step of distributing parallax values to the pixels in each region based on the classification results comprises: according to the plane type to which each region belongs, calculating the parallax values of the pixels in this region using the parallax distribution model corresponding to this plane type.
4. The method of claim 2 or claim 3, wherein establishing the parallax distribution models of different plane types comprises:
establishing a first parallax distribution model corresponding to a first plane type;
establishing a second parallax distribution model corresponding to a second plane type; and
establishing a third parallax distribution model corresponding to a third plane type,
wherein the step of calculating, for each region obtained by the division, the parameters corresponding to the parallax distribution model of each plane type comprises: calculating, for each region, a first parameter corresponding to the first parallax distribution model, a second parameter corresponding to the second parallax distribution model, and a third parameter corresponding to the third parallax distribution model, respectively.
5. The method of claim 4, wherein the step of determining the plane type to which each region belongs based on the region's parameters comprises:
if the first parameter of any region is less than a predetermined threshold and this first parameter is sufficiently smaller than the second parameter of this region, determining that this region belongs to the first plane type;
if the second parameter of this region is less than the predetermined threshold and this second parameter is sufficiently smaller than the first parameter, determining that this region belongs to the second plane type;
otherwise, determining that this region belongs to the third plane type,
wherein the predetermined threshold is set based on the third parameter of this region.
6. The method of claim 5, further comprising: obtaining an initial parallax map corresponding to the reference image,
wherein the step of calculating the first parameter corresponding to the first parallax distribution model comprises:
scanning every row of pixels in each region to obtain, from the initial parallax map, a histogram of the effective parallax values of each row of pixels;
obtaining the peak of the effective-parallax-value histogram of each row of pixels;
according to the first parallax distribution model, fitting the first parallax distribution expression of this region using the points formed by the ordinate of each row of pixels and the initial parallax value corresponding to the histogram peak; and
calculating the first parameter of this region according to the first parallax distribution expression of this region,
wherein an effective parallax value is a parallax value greater than zero.
7. The method of claim 6, wherein the step of calculating the second parameter corresponding to the second parallax distribution model comprises:
scanning each column of pixels in each region to obtain, from the initial parallax map, a histogram of the effective parallax values of each column of pixels;
obtaining the peak of the effective-parallax-value histogram of each column of pixels;
according to the second parallax distribution model, fitting the second parallax distribution expression of this region using the points formed by the abscissa of each column of pixels and the parallax value corresponding to the histogram peak; and
calculating the second parameter of this region according to the second parallax distribution expression of this region.
8. The method of claim 7, wherein the step of calculating the third parameter corresponding to the third parallax distribution model comprises:
scanning all pixels in each region to obtain, from the initial parallax map, the effective parallax values of all pixels;
according to the third parallax distribution model, fitting the third parallax distribution expression of this region using the abscissas, ordinates, and corresponding effective parallax values of all pixels in this region that have effective parallax values; and
calculating the third parameter of this region according to the third parallax distribution expression of this region.
9. The method of claim 8, wherein the step of calculating, according to the plane type to which each region belongs, the parallax values of the pixels in this region using the parallax distribution model corresponding to this plane type comprises:
if any region is classified as the first plane type, calculating the parallax values of the pixels in this region according to the first parallax distribution expression of this region, as the parallax values distributed to said pixels;
if this region is classified as the second plane type, calculating the parallax values of the pixels in this region according to the second parallax distribution expression of this region, as the parallax values distributed to said pixels; and
if this region is classified as the third plane type, calculating the parallax values of the pixels in this region according to the third parallax distribution expression of this region, as the parallax values distributed to said pixels.
10. A system for configuring parallax values, comprising:
an acquiring unit configured to obtain a reference image;
a segmentation unit configured to divide the obtained reference image into multiple regions;
a classification unit configured to classify the multiple regions obtained by the segmentation unit; and
an allocation unit configured to distribute parallax values to the pixels in each region based on the classification results of the classification unit.
CN201310233783.3A 2013-06-13 2013-06-13 Configure the method and system of parallax value Expired - Fee Related CN104240219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310233783.3A CN104240219B (en) 2013-06-13 2013-06-13 Configure the method and system of parallax value

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310233783.3A CN104240219B (en) 2013-06-13 2013-06-13 Configure the method and system of parallax value

Publications (2)

Publication Number Publication Date
CN104240219A true CN104240219A (en) 2014-12-24
CN104240219B CN104240219B (en) 2017-08-08

Family

ID=52228228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310233783.3A Expired - Fee Related CN104240219B (en) 2013-06-13 2013-06-13 Configure the method and system of parallax value

Country Status (1)

Country Link
CN (1) CN104240219B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724431A (en) * 2019-03-22 2020-09-29 北京地平线机器人技术研发有限公司 Disparity map obtaining method and device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050286756A1 (en) * 2004-06-25 2005-12-29 Stmicroelectronics, Inc. Segment based image matching method and system
CN101262619A (en) * 2008-03-30 2008-09-10 深圳华为通信技术有限公司 Method and device for capturing view difference
CN102819843A (en) * 2012-08-08 2012-12-12 天津大学 Stereo image parallax estimation method based on boundary control belief propagation


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
R LABAYRADE ET AL: "Real time obstacle detection in stereovision on non flat road geometry through disparity representation", 《INTELLIGENT VEHICLE SYMPOSIUM》 *
ZHENCHENG HU ET AL: "A complete U-V-disparity study for stereovision based 3D driving environment analysis", 《A COMPLETE U-V-DISPARITY STUDY FOR STEREOVISION BASED 3D DRIVING ENVIRONMENT ANALYSIS》 *
SHANGGUAN JUN: "Research on obstacle recognition technology based on the U-V disparity algorithm", JOURNAL OF LANZHOU POLYTECHNIC COLLEGE *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724431A (en) * 2019-03-22 2020-09-29 北京地平线机器人技术研发有限公司 Disparity map obtaining method and device and electronic equipment
CN111724431B (en) * 2019-03-22 2023-08-08 北京地平线机器人技术研发有限公司 Parallax map obtaining method and device and electronic equipment

Also Published As

Publication number Publication date
CN104240219B (en) 2017-08-08

Similar Documents

Publication Publication Date Title
JP6131704B2 (en) Detection method for continuous road segment and detection device for continuous road segment
US9378424B2 (en) Method and device for detecting road region as well as method and device for detecting road line
US9311542B2 (en) Method and apparatus for detecting continuous road partition
CN102136136B (en) Luminosity insensitivity stereo matching method based on self-adapting Census conversion
CN105023010A (en) Face living body detection method and system
CN103248906B (en) Method and system for acquiring depth map of binocular stereo video sequence
CN103810744A (en) Backfilling points in a point cloud
CN105740802A (en) Disparity map-based obstacle detection method and device as well as automobile driving assistance system
JP5164351B2 (en) Object detection apparatus and object detection method
CN105335955A (en) Object detection method and object detection apparatus
CN104166834A (en) Pavement detection method and pavement detection device
CN101610425A (en) A kind of method and apparatus of evaluating stereo image quality
CN111476242A (en) Laser point cloud semantic segmentation method and device
CN104331890B (en) A kind of global disparity method of estimation and system
CN103871042A (en) Method and device for detecting continuous type object in parallax direction based on disparity map
CN109791607A (en) It is detected from a series of images of video camera by homography matrix and identifying object
CN107689060A (en) Visual processing method, device and the equipment of view-based access control model processing of destination object
CN107808140A (en) A kind of monocular vision Road Recognition Algorithm based on image co-registration
CN103260043A (en) Binocular stereo image matching method and system based on learning
Saval-Calvo et al. Three-dimensional planar model estimation using multi-constraint knowledge based on k-means and RANSAC
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
CN104252707B (en) Method for checking object and device
CN104123715B (en) Configure the method and system of parallax value
CN104240219A (en) Method and system for allocating parallax values
Neverova et al. 2 1/2 D scene reconstruction of indoor scenes from single RGB-D images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170808

CF01 Termination of patent right due to non-payment of annual fee