CN107993233B - Pit area positioning method and device - Google Patents

Pit area positioning method and device

Info

Publication number
CN107993233B
Authority
CN
China
Prior art keywords
pit area
pit
image
area
current environment
Prior art date
Legal status: Active
Application number
CN201610950780.5A
Other languages
Chinese (zh)
Other versions
CN107993233A (en)
Inventor
孟令江
欧勇盛
吕琴
江国来
王志扬
冯伟
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN201610950780.5A
Publication of CN107993233A
Application granted
Publication of CN107993233B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20036 Morphological image processing
    • G06T 2207/20228 Disparity calculation for image-based rendering
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30181 Earth observation
    • G06T 2207/30184 Infrastructure

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of machine vision, and provides a method and a device for positioning a pit area. The method comprises the following steps: detecting pit areas in the left and right images shot by a binocular camera of the current environment; acquiring the correspondence between pit areas in the left image and pit areas in the right image, wherein a pit area in the left image and its corresponding pit area in the right image represent the same pit area in the current environment; acquiring the disparity value between a pit area in the left image and the corresponding pit area in the right image; and positioning the same pit area in the current environment according to that disparity value. The present invention enables pit areas to be located.

Description

Pit area positioning method and device
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a pit area positioning method and device.
Background
At present, many pit areas exist in field environments, and their edge contours are usually irregular. A robot may detect pit areas while traveling, and after detecting and identifying a pit area the robot needs to position it.
Therefore, a new technical solution is needed to solve the above technical problems.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for positioning a pit area to position the pit area.
In a first aspect of the embodiments of the present invention, a method for positioning a pit area is provided, where the method includes:
detecting pit areas in the left and right images shot by a binocular camera of the current environment;
acquiring the correspondence between pit areas in the left image and pit areas in the right image, wherein a pit area in the left image and its corresponding pit area in the right image represent the same pit area in the current environment;
acquiring the disparity value between a pit area in the left image and the corresponding pit area in the right image;
and positioning the same pit area in the current environment according to the disparity value between the pit area in the left image and the corresponding pit area in the right image.
In a second aspect of the embodiments of the present invention, there is provided an apparatus for locating a pit area, the apparatus including:
the pit area detection module is used for detecting pit areas in the left and right images shot by the binocular camera of the current environment;
a relation obtaining module, configured to obtain the correspondence between pit areas in the left image and pit areas in the right image, where a pit area in the left image and its corresponding pit area in the right image represent the same pit area in the current environment;
a disparity value obtaining module, configured to obtain the disparity value between a pit area in the left image and the corresponding pit area in the right image;
and the positioning module is used for positioning the same pit area in the current environment according to the disparity value between the pit area in the left image and the corresponding pit area in the right image.
Compared with the prior art, the embodiments of the invention have the following beneficial effects: pit areas are detected in the left and right images shot by the binocular camera of the current environment; the correspondence between pit areas in the left image and pit areas in the right image is obtained, where corresponding pit areas in the left and right images represent the same pit area in the current environment; and the same pit area in the current environment, represented by the corresponding pit areas in the left and right images, is positioned according to the disparity value between the pit area in the left image and the corresponding pit area in the right image; namely, the distance between the pit area in the current environment and the binocular camera, the azimuth angle, the actual width of the pit area in the current environment and the like are obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an implementation of a pit area positioning method according to an embodiment of the present invention;
FIG. 2a is an exemplary diagram of a third region divided using five rectangles; FIG. 2b is an exemplary graph of the gray value variation gradient of the divided third region of FIG. 2a;
FIG. 3 is an exemplary diagram of the rotation and stretching of a first region;
Fig. 4 is an exemplary diagram of the overlapping of a plurality of pit areas characterizing the same area;
FIG. 5a is an exemplary diagram of a pit area taken from the left image; FIG. 5b is an exemplary diagram of the corresponding pit area taken from the right image; FIG. 5c is a disparity map of the images shown in FIG. 5a and FIG. 5b; FIG. 5d is an exemplary view of two regions taken to the left and right of the pit region position in the disparity map shown in FIG. 5c;
FIG. 6a is an exemplary diagram of the minimum distance of a pit area from the binocular camera in the current environment; FIG. 6b is an exemplary diagram of the maximum distance of a pit area from the binocular camera in the current environment;
FIG. 7 is an exemplary illustration of the azimuth angle between the left camera of the binocular camera and a pit area in the current environment;
Fig. 8 is a schematic composition diagram of a pit area positioning apparatus according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Embodiment one:
fig. 1 shows an implementation flow of a method for positioning a pit area according to an embodiment of the present invention, where the implementation flow is detailed as follows:
step S101 detects a pit area in left and right images captured by the binocular camera for the current environment.
In the embodiment of the invention, when the binocular camera shoots the current environment, a left image and a right image of the same shot object are formed. The pit areas in the left image or in the right image shot by the binocular camera of the current environment can be detected through steps (1) to (9) below, where the image being processed (the left image or the right image) is called the image to be detected. The specific steps are as follows:
(1) Acquiring the gray value of each pixel in the image to be detected.
In the embodiment of the invention, the image to be detected directly acquired by the binocular camera is a color image containing considerable interference; in addition, many fine objects (such as gravel) in the field environment can also cause image interference. Therefore, the image to be detected is preprocessed before the gray value of each pixel is acquired. The preprocessing is as follows: in the field environment, the interference usually consists of irregular abrupt noise, which can be removed by a nonlinear filter. Common nonlinear filters include the median filter and the bilateral filter; the median filter is faster, while the bilateral filter gives a better result. After the noise is removed, the image to be detected is converted from a color image into a grayscale image. To further highlight the features of large areas in the image to be detected, a closing operation from morphological processing can be used to eliminate small black holes.
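As an illustration only, this preprocessing might be sketched as follows in Python with OpenCV; the concrete filter parameters and kernel size are assumptions, not values given by the patent.

```python
import cv2

def preprocess(image_bgr, use_bilateral=False):
    # Nonlinear filtering removes the irregular abrupt noise of a field
    # environment: median is faster, bilateral preserves edges better.
    if use_bilateral:
        denoised = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    else:
        denoised = cv2.medianBlur(image_bgr, 5)
    # Convert the color image to grayscale.
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    # Morphological closing fills small dark holes so large regions stand out.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
```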
(2) Segmenting the image to be detected according to the gray value of each pixel in the image to be detected and each of a plurality of preset segmentation thresholds, and acquiring the first regions corresponding to each preset segmentation threshold in the image to be detected.
In the embodiment of the present invention, the plurality of preset segmentation thresholds form a sequence of preset gray values, and the initial value and the end value of the sequence can be set according to actual needs. Preferably, when the image to be detected is mostly ground, the initial value may be set to half of the average gray value of all pixels in the image to be detected, and the end value may be the initial value plus 50. When the image to be detected is mostly non-ground, the initial value may be set to 50, and the end value is again the initial value plus 50. These initial and end values were obtained through a large number of tests and can basically meet the processing requirements of all field environments.
In the embodiment of the invention, a series of gray values is used as segmentation thresholds (namely the plurality of preset segmentation thresholds) to perform threshold segmentation on the image to be detected, and the features obtained from each threshold segmentation are used to process the image; this captures features that cannot be obtained by processing the image only once.
Optionally, the segmenting the image to be detected according to the gray value of each pixel in the image to be detected and a plurality of preset segmentation thresholds, and acquiring a corresponding first region of each preset segmentation threshold in the plurality of preset segmentation thresholds in the image to be detected includes:
dividing, from all pixels of the image to be detected, the pixels whose gray values are larger than a first preset segmentation threshold, and determining them as the first pixels corresponding to the first preset segmentation threshold in the image to be detected;
if a plurality of adjacent first pixels corresponding to the first preset segmentation threshold exist in the image to be detected, combining these adjacent first pixels to form a third area corresponding to the first preset segmentation threshold in the image to be detected;
acquiring the gray value variation gradient of each third region corresponding to the first preset segmentation threshold in the image to be detected, removing the third regions whose gray value variation gradients do not satisfy a second preset condition, and determining the remaining third regions as the first regions corresponding to the first preset segmentation threshold in the image to be detected;
dividing, from all pixels of the image to be detected, the pixels whose gray values are larger than a second preset segmentation threshold, and determining them as the first pixels corresponding to the second preset segmentation threshold in the image to be detected;
if a plurality of adjacent first pixels corresponding to the second preset segmentation threshold exist in the image to be detected, combining these adjacent first pixels to form a third area corresponding to the second preset segmentation threshold in the image to be detected;
acquiring the gray value variation gradient of each third region corresponding to the second preset segmentation threshold in the image to be detected, removing the third regions whose gray value variation gradients do not satisfy the second preset condition, and determining the remaining third regions as the first regions corresponding to the second preset segmentation threshold in the image to be detected;
and repeating the above steps until the plurality of preset segmentation thresholds are traversed.
In the embodiment of the present invention, before removing the third regions whose gray value variation gradients do not satisfy the second preset condition, a part of the regions that do not meet the requirements may first be removed according to the parameters of the camera and the engineering requirements. The size limit for these regions can be defined empirically; assuming the limit is a minimum width of 15 pixels and a minimum height of 4 pixels, third regions whose width is less than 15 pixels or whose height is less than 4 pixels are removed. The gray value variation gradient describes how the gray value of a third region changes from its edge toward part of its interior. Usually the gray value of a pit area in a field environment decreases from the periphery to the center. This property can be used to remove non-pit areas; however, there is a special case in which the gray value at the center of a pit is higher than the gray values elsewhere inside the pit. Therefore, when removing regions using the gray value variation, only the part from the edge toward the inside of the region is considered and the center of the pit is ignored, so that shallow pits are not missed.
Specifically, to simplify the calculation, the third region may be divided using five rectangles and the gray value variation computed. Fig. 2a is an exemplary diagram of dividing the third region with five rectangles: the third region is split at its center line and divided into 10 small regions from top to bottom, that is, the 10 small regions run from the edge of the third region to the center and then from the center to the edge; the 10 small regions do not overlap one another, and the center is the center line in Fig. 2a. The total gray value of each small region is calculated by formula (1):
sum_k = Σ_{i=rowStart}^{rowEnd} Σ_{j=colStart}^{colEnd} X(i, j)   (1)

where sum_k is the total gray value of the k-th small region, rowStart and rowEnd are the starting and ending rows of the k-th small region, colStart and colEnd are its starting and ending columns, and X(i, j) is the gray value of the pixel at row i, column j of the k-th small region.
The area of each small region is calculated by equation (2):
N_k = ((colEnd - colStart) * (rowEnd - rowStart))_k   (2)

where N_k is the area of the k-th small region.
The average gray value of each small region is calculated by formula (3):
average_k = sum_k / N_k   (3)

where average_k is the average gray value of the k-th small region.
Fig. 2b shows the gray value variation gradient of the third region divided in Fig. 2a. The 11 points on the abscissa represent, from left to right, the 10 small regions from top to bottom plus the center line of the third region; the ordinate represents the average gray value. The dotted line shows the gray value of each pixel along the center line of the divided third region, and the solid line shows the average gray value of each divided small region. When the average gray values of three consecutive small regions decrease going inwards from the edge (for example, the gradient from the average gray value of the first small region at the upper edge to that of the second is negative and the gradient from the second to the third is negative; or the gradient from the average gray value of the tenth small region at the lower edge to that of the ninth is negative and the gradient from the ninth to the eighth is negative), the third region is considered a suspected pit region; otherwise it is a non-pit region. As can be seen from the solid line in Fig. 2b, the gradient from the tenth small region to the ninth is negative but the gradient from the ninth small region to the eighth is positive, which does not satisfy the condition, so the third region corresponding to Fig. 2b is a non-pit region.
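Purely as an illustrative sketch of this test (the strip layout and the requirement that both edges pass are simplifying assumptions; the patent's examples could also be read as requiring only one edge), the check on the 10 strip averages might look like:

```python
import numpy as np

def strip_means(gray, mask, n_strips=10):
    """Average gray value of n horizontal strips of the region, top to bottom.
    gray: grayscale image; mask: boolean array, True inside the third region."""
    ys, xs = np.nonzero(mask)
    top, bottom = ys.min(), ys.max() + 1
    bounds = np.linspace(top, bottom, n_strips + 1).astype(int)
    means = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        vals = gray[a:b, :][mask[a:b, :]]
        means.append(vals.mean() if vals.size else 0.0)
    return means

def gradient_test(means):
    """Suspected pit: three consecutive strip averages decrease going
    inwards from the top edge and from the bottom edge."""
    top_ok = means[0] > means[1] > means[2]
    bottom_ok = means[9] > means[8] > means[7]
    return top_ok and bottom_ok
```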
For example, suppose the plurality of preset segmentation thresholds are the 51 gray values from 50 to 100. First, pixels with gray values greater than 50 are segmented from all pixels of the image to be detected; if several adjacent pixels with gray values greater than 50 exist, they are combined to form a third region corresponding to the first preset segmentation threshold 50, there may be several non-adjacent such third regions in the image to be detected, each consisting of adjacent pixels with gray values greater than 50, and the third regions whose gray value variation gradients do not satisfy the second preset condition are removed. Next, pixels with gray values greater than 51 are segmented from all pixels of the image to be detected; adjacent pixels with gray values greater than 51 are combined to form the third regions corresponding to the second preset segmentation threshold 51, and the third regions whose gray value variation gradients do not satisfy the second preset condition are removed. Then pixels with gray values greater than 52 are segmented, adjacent pixels are combined to form the third regions corresponding to the third preset segmentation threshold 52, and the unsatisfactory third regions are removed; and so on, until all 51 preset segmentation thresholds have been traversed.
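As a hedged sketch of the overall sweep in step (2) (8-connectivity, the size limits, and the reuse of the helpers above are assumptions):

```python
import cv2
import numpy as np

def candidate_first_regions(gray, t_start, t_end, min_w=15, min_h=4):
    """For each preset segmentation threshold, binarize the image, merge
    adjacent above-threshold pixels into third regions, and keep as first
    regions those passing the size limit and the gradient test above."""
    first_regions = {}
    for t in range(t_start, t_end + 1):
        binary = (gray > t).astype(np.uint8)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
        kept = []
        for k in range(1, n):  # label 0 is the background
            w = stats[k, cv2.CC_STAT_WIDTH]
            h = stats[k, cv2.CC_STAT_HEIGHT]
            if w < min_w or h < min_h:
                continue  # empirical size limit from the text
            mask = labels == k
            if gradient_test(strip_means(gray, mask)):
                kept.append(mask)
        first_regions[t] = kept
    return first_regions
```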
(3) Acquiring the inclination angle, relative to the horizontal direction, of the first regions corresponding to each preset segmentation threshold in the image to be detected, and rotating each first region according to its inclination angle so that it is parallel to the horizontal direction.
In the embodiment of the present invention, the inclination angle of the first region relative to the horizontal direction refers to the inclination angle, relative to the horizontal direction, of the line through the centroid of the first region along which the first region has its largest length.
Optionally, the obtaining an inclination angle of the first region corresponding to each preset segmentation threshold in the image to be detected with respect to the horizontal direction, and rotating the corresponding first region according to the inclination angle includes:
acquiring position information of a centroid of a corresponding first region of each preset segmentation threshold in the image to be detected;
changing the slope of a straight line passing through the centroid within a preset angle range at preset slope intervals, and acquiring the length of the straight line passing through the centroid corresponding to each slope in a first area corresponding to the centroid;
searching the maximum length from the lengths of the straight line which passes through the centroid and corresponds to each obtained slope in the corresponding first region, and obtaining the straight line which passes through the centroid and corresponds to the maximum length;
acquiring a slope corresponding to a straight line passing through the centroid corresponding to the maximum length to obtain an inclination angle of a first region corresponding to the centroid with respect to a horizontal direction;
rotating a first region corresponding to the centroid according to the tilt angle.
Specifically, the position of the centroid of the first area can be calculated by formula (4):
x_0 = (Σ_i x_i) / n,   y_0 = (Σ_i y_i) / n   (4)

where the sums run over the n pixels of the first region, M and N denote the ranges of the horizontal and vertical coordinates of the first region, and x_i and y_i denote the horizontal and vertical coordinate values of the pixels in the first region.
After the position information of the centroid of the first region is acquired, straight lines through the centroid are taken within a preset angle range at a preset slope interval (for example, 0.1); for each slope, the length of the portion of the line through the centroid that falls inside the first region is calculated; the maximum length, namely the straight line corresponding to the maximum length, is found, and the first region is rotated according to the slope of that line. Preferably, since from the viewpoint of the camera the height of a pit region is smaller than its width, the preset angle range is -π/4 to π/4.
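A rough sketch of this longest-chord search (the sampling step sizes and the use of the ±45° range are assumptions):

```python
import numpy as np

def tilt_angle(mask):
    """mask: boolean array, True inside the first region.
    Returns the angle (radians) of the longest chord through the centroid."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()  # centroid, formula (4)
    h, w = mask.shape
    best_len, best_angle = -1, 0.0
    for angle in np.arange(-np.pi / 4, np.pi / 4, 0.01):
        dx, dy = np.cos(angle), np.sin(angle)
        length = 0
        # Walk outwards in both directions, counting samples inside the mask.
        for sign in (1, -1):
            t = 0.0 if sign > 0 else 1.0  # count the centroid only once
            while True:
                x = int(round(x0 + sign * t * dx))
                y = int(round(y0 + sign * t * dy))
                if not (0 <= x < w and 0 <= y < h) or not mask[y, x]:
                    break
                length += 1
                t += 1.0
        if length > best_len:
            best_len, best_angle = length, angle
    return best_angle
```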
(4) Stretching, according to a preset rule, the rotated first regions corresponding to each preset segmentation threshold in the image to be detected.
Optionally, stretching the corresponding first region of each rotated preset segmentation threshold in the image to be detected according to a preset rule includes:
acquiring the height and width of the corresponding first region of each rotated preset segmentation threshold in the image to be detected;
and stretching the height of the corresponding first region of each rotated preset segmentation threshold in the image to be detected, so that the height and the width of the corresponding first region of each rotated preset segmentation threshold in the image to be detected are the same.
In the embodiment of the invention, the height and the width of the rotated first region in the image to be detected are acquired, and the height is stretched to equal the width (from the viewpoint of the camera, the height of the rotated first region cannot be larger than its width), so that the shape of the first region is stretched to approximate a circle. Since the stretching only scales the height of the region to match its width, without adjusting each point on the contour individually, the rotated and stretched region cannot become an exact circle, but it approximates one. Fig. 3 shows the rotation and stretching of a first region.
(5) Acquiring the position information of the centroid of each stretched first region corresponding to each preset segmentation threshold in the image to be detected, and acquiring the circle similarity of the stretched first region according to the position information of the centroid.
Specifically, m points (for example, 18 points) are taken at equal intervals on the contour of the stretched first region, with positions in the image to be detected denoted (x_1, y_1), (x_2, y_2), …, (x_m, y_m). The average distance mean from these points to the centroid (x_0, y_0) is calculated by equations (5) and (6):

d_i = sqrt((x_i - x_0)^2 + (y_i - y_0)^2)   (5)

mean = (1/m) * Σ_{i=1}^{m} d_i   (6)
The circle similarity of the stretched first region is calculated according to equations (7), (8) and (9):

dis_i = abs(d_i - mean)   (7)

mean_l = (1/m) * Σ_{i=1}^{m} dis_i   (8)

similarity = mean / mean_l   (9)
wherein sqrt is a square root function and abs is an absolute value function.
In the embodiment of the invention, a series of points is taken at equal intervals on the contour of the stretched first region when calculating its circle similarity, which greatly reduces the interference of abrupt edges with the calculation; the circle similarity is then judged using the relation between the average distance mean from the contour to the centroid and the average distance mean_l from the contour to the new circle of radius mean.
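Equations (5) to (9) transcribe directly; in this sketch the sampling count m = 18 follows the example above, and the contour is assumed to be given as an array of points:

```python
import numpy as np

def circle_similarity(contour_xy, centroid, m=18):
    """contour_xy: (k, 2) array of contour points of the stretched region.
    Samples m equally spaced points and applies equations (5)-(9)."""
    idx = np.linspace(0, len(contour_xy) - 1, m).astype(int)
    pts = contour_xy[idx].astype(float)
    x0, y0 = centroid
    d = np.sqrt((pts[:, 0] - x0) ** 2 + (pts[:, 1] - y0) ** 2)  # (5)
    mean = d.mean()                                             # (6)
    dis = np.abs(d - mean)                                      # (7)
    mean_l = dis.mean()                                         # (8)
    return mean / mean_l if mean_l > 0 else np.inf              # (9)
```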
(6) Removing, from the stretched first regions corresponding to each preset segmentation threshold in the image to be detected, those whose circle similarity is smaller than a preset similarity threshold, and determining the remaining stretched first regions as the second regions corresponding to each preset segmentation threshold in the image to be detected.
(7) Acquiring the flattening rate of the second regions corresponding to each preset segmentation threshold in the image to be detected, and the number of times that second regions representing the same region appear under the plurality of preset segmentation thresholds.
Optionally, the acquiring of the number of times that second regions representing the same region appear under the plurality of preset segmentation thresholds includes:
acquiring position information of a centroid of a second region corresponding to a first preset segmentation threshold in the image to be detected;
acquiring position information of a second region corresponding to a second preset segmentation threshold in the image to be detected;
if the second region corresponding to the second preset segmentation threshold in the image to be detected covers the centroid of the second region corresponding to the first preset segmentation threshold in the image to be detected, determining that the second region corresponding to the second preset segmentation threshold in the image to be detected and the second region corresponding to the first preset segmentation threshold in the image to be detected are the same region, and adding 1 to the number of times that the second region corresponding to the same region appears under the preset segmentation thresholds;
acquiring position information of a centroid of a second region corresponding to the second preset segmentation threshold in the image to be detected;
acquiring position information of a second region corresponding to a third preset segmentation threshold in the image to be detected;
if the second region corresponding to the third preset segmentation threshold in the image to be detected covers the centroid of the second region corresponding to the second preset segmentation threshold in the image to be detected, determining that the second region corresponding to the third preset segmentation threshold in the image to be detected and the second region corresponding to the second preset segmentation threshold in the image to be detected are the same region, and adding 1 to the number of times that the second region corresponding to the same region appears under the preset segmentation thresholds;
and repeating the steps until the preset segmentation thresholds are traversed.
Taking the three preset segmentation thresholds with gray values 50, 51 and 52 as an example: if the second region corresponding to the preset segmentation threshold 51 in the image to be detected covers the centroid of the second region corresponding to the preset segmentation threshold 50, the two second regions are determined to be the same region, and the number of times that this region appears under the three preset segmentation thresholds is 1. If the second region corresponding to the preset segmentation threshold 52 covers the centroid of the second region corresponding to the preset segmentation threshold 51, the second regions corresponding to thresholds 52 and 51 are determined to be the same region; and since the second regions corresponding to thresholds 51 and 50 are already the same region, the second regions corresponding to all three preset segmentation thresholds 50, 51 and 52 represent the same region, and the number of times that this region appears under the three preset segmentation thresholds is 2.
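An illustrative sketch of this chained counting (the data layout and dictionary keys are assumptions):

```python
def count_occurrences(regions_by_threshold):
    """regions_by_threshold: list, one entry per ascending threshold, each a
    list of dicts {'mask': bool array, 'centroid': (x, y), 'count': int}.
    A region at threshold t+1 that covers the centroid of a region at
    threshold t is counted as the same region."""
    for prev, curr in zip(regions_by_threshold, regions_by_threshold[1:]):
        for r_prev in prev:
            cx, cy = (int(round(v)) for v in r_prev['centroid'])
            for r_curr in curr:
                if r_curr['mask'][cy, cx]:  # covers the previous centroid
                    r_curr['count'] = r_prev.get('count', 0) + 1
                    break
    return regions_by_threshold
```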
(8) If the flattening rate of a second area corresponding to a preset segmentation threshold in the image to be detected and the number of times that second area appears under the plurality of preset segmentation thresholds satisfy a first preset condition, determining that the second area is a pit area.
In the embodiment of the present invention, the flatness ratio of the second area is a ratio of a height to a width of the second area in the image to be measured.
In the embodiment of the invention, because the shooting angle of the camera and the distance to the shot object differ, a suspected pit area (namely a second area) has a different flattening rate in the image to be detected: the farther the ellipse is from the camera, the flatter it appears and the lower its flattening rate; conversely, the closer it is to the camera, the higher its flattening rate. Correspondingly, a region with a higher flattening rate is detected more times under the plurality of preset segmentation thresholds (that is, it appears more times under the preset segmentation thresholds), and a region with a lower flattening rate is detected fewer times. Therefore, a suspected pit area can be further screened using the relation between the number of times the area is detected under the plurality of preset segmentation thresholds and its flattening rate. Assume that when the camera shoots the pit area vertically, the pit area appears 10 times under the preset segmentation thresholds with a flattening rate of 1, and that when the camera is farthest from the pit area, the pit area appears 1 time with a flattening rate of 0.26 (namely, assuming the minimum width of a pit area detectable by the camera is 15 pixels and the minimum height is 4 pixels, the flattening rate is 4/15 ≈ 0.26). This yields the function y = 12.1621x - 2.1621, where y represents the number of times a pit region appears under the plurality of preset segmentation thresholds and x represents the flattening rate of the pit region. If the flattening rate of the second area in (8) and its number of occurrences under the plurality of preset segmentation thresholds satisfy this functional relation, the second area is determined to be a pit area; if not, it is a non-pit area.
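A sketch of the resulting screening rule (the tolerance band is a hypothetical parameter; the patent does not state how strictly the fitted line must be matched):

```python
def satisfies_first_condition(flattening, occurrences, tol=2.0):
    """Compare the occurrence count with the fitted line
    y = 12.1621 * x - 2.1621; tol is a hypothetical tolerance."""
    expected = 12.1621 * flattening - 2.1621
    return abs(occurrences - expected) <= tol
```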
(9) Combining the plurality of pit areas that represent the same area, among the pit areas corresponding to the plurality of preset segmentation thresholds in the image to be detected, into one pit area, so as to extract the pit area in the image to be detected.
In the implementation of the present invention, the pit areas representing the same area under different preset segmentation thresholds may overlap; for example, Fig. 4 is an exemplary diagram of overlapping pit areas representing the same area. The overlapping pit areas representing the same area are merged into one pit area that includes all of them; for example, in the first overlap case in Fig. 4, area 1 includes area 2, and the merged pit area is area 1. By merging a plurality of pit areas representing the same pit area into one, repeated extraction of pit areas is avoided.
If the number of pit areas representing the same area is 1, the pit areas are directly extracted without merging.
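A simple merge sketch (representing pit areas as bounding boxes and merging by taking the enclosing box is a simplifying assumption):

```python
def merge_same_area(boxes):
    """boxes: list of (x, y, w, h) pit areas representing the same area.
    Returns one box that contains them all."""
    if len(boxes) == 1:
        return boxes[0]  # a single pit area is extracted directly
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[0] + b[2] for b in boxes)
    y2 = max(b[1] + b[3] for b in boxes)
    return (x1, y1, x2 - x1, y2 - y1)
```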
It should be noted that, in the embodiment of the present invention, a loop is established over the plurality of preset segmentation thresholds; the number of iterations equals the number of preset segmentation thresholds, and each iteration is the processing procedure under one preset segmentation threshold. In each iteration, suspected non-pit regions are removed according to the gray value variation gradient, the rotation and stretching of the first regions, and the circle similarity of the first regions; after the loop ends, possible non-pit regions are removed according to the relation between the flattening rate of each second region and the number of times it appears under the plurality of preset segmentation thresholds.
Step S102, acquiring the correspondence between pit areas in the left image and pit areas in the right image, where a pit area in the left image and its corresponding pit area in the right image represent the same pit area in the current environment.
In the embodiment of the present invention, after the pit regions in the left and right images shot by the binocular camera of the current environment are detected through steps (1) to (9), the correspondence between pit regions in the left image and pit regions in the right image is obtained using a classical feature point matching algorithm (e.g., the Speeded-Up Robust Features (SURF) algorithm or the Scale-Invariant Feature Transform (SIFT) algorithm), where a pit region in the left image and its corresponding pit region in the right image represent the same pit region in the current environment. For example, if pit area a1 in the left image corresponds to pit area a2 in the right image, it is determined that a1 and a2 both represent pit area a in the current environment: a1 is the image of a in the left view and a2 is the image of a in the right view when a is captured by the binocular camera.
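As an illustrative sketch (OpenCV's SIFT with a ratio test stands in for whichever classical feature point matcher is chosen; cropping the candidate regions is assumed to be done by the caller):

```python
import cv2

def match_score(left_roi, right_roi, ratio=0.75):
    """Count ratio-test feature matches between two candidate pit regions;
    the right-image region with the highest score corresponds to the
    left-image region."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(left_roi, None)
    kp2, des2 = sift.detectAndCompute(right_roi, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```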
Step S103, obtaining a disparity value between the pit area in the left image and the corresponding pit area in the right image.
Optionally, the acquiring the disparity value of the pit area in the left image and the corresponding pit area in the right image includes:
acquiring a matching relation between pixels in the pit area in the left image and pixels in the corresponding pit area in the right image;
and acquiring the disparity value between the pit area in the left image and the corresponding pit area in the right image according to the matching relation between the pixels in the pit area in the left image and the pixels in the corresponding pit area in the right image.
In the embodiment of the present invention, before obtaining the matching relation between pixels in a pit region in the left image and pixels in the corresponding pit region in the right image, in order to further highlight the features of the pit region, the pit region in the left image may be extended outward by a preset number of pixels, and the corresponding pit region in the right image extended outward by the same preset number of pixels. With this extension, when a semi-global matching algorithm (or a region matching algorithm, a global matching algorithm, etc.) is adopted, the correspondence between pixels in the pit regions is obtained together with the correspondence between the surrounding preset pixels, which further helps to verify the pit region. Then, gray level equalization is performed on the extended pit region in the left image and the corresponding extended pit region in the right image to reduce differences caused by camera imaging, and contrast enhancement can also be applied to further strengthen the texture features of the pit regions. The matching relation between pixels in the extended pit region in the left image and pixels in the corresponding extended pit region in the right image is then obtained by the semi-global matching algorithm; the pit region is cut out of the left image at a preset size, and the corresponding pit region is cut out of the right image at the same preset size, where the cut-out region of the preset size must include the pit region and the extended preset pixels. For example, FIG. 5a is an exemplary diagram of a pit area cut from the left image, FIG. 5b is an exemplary diagram of the corresponding pit area cut from the right image, and FIG. 5c is the disparity map of the images shown in FIG. 5a and FIG. 5b. FIG. 5d is an exemplary diagram of two regions taken to the left and right of the pit region position in the disparity map shown in FIG. 5c; the two taken regions have half the size of the pit region in the disparity map. As shown in FIG. 5d, the two taken regions are blocks 3 and 4; block 1 is the pit region in the disparity map, and block 2 is obtained by halving the width and height of the pit region in the disparity map. Blocks 3 and 4 have the same size as block 2, are parallel to block 2, and are separated from block 1 by the width of block 2. The disparity values of the two regions taken to the left and right of the pit region and the disparity value of the pit region in the disparity map are then obtained, and according to the formula Z = f * b / d (where f is the focal length, b the baseline length, and d the disparity value), the distances from the two taken regions to the binocular camera and from the pit region to the binocular camera in the actual shooting environment are calculated respectively. If the distances from the two taken regions to the binocular camera in the actual shooting environment are smaller than the distance from the pit region to the binocular camera, the pit region is determined to be a real pit region, and the subsequent step S104 is executed. If not, the pit region is determined to be a non-pit region and is removed, and step S104 is not executed.
Optionally, the acquiring, according to the matching relation between pixels in the pit area in the left image and pixels in the corresponding pit area in the right image, of the disparity value between the pit area in the left image and the corresponding pit area in the right image includes:
acquiring the disparity values between the pixels in the pit area in the left image and their matched pixels in the corresponding pit area in the right image;
and acquiring the disparity value between the pit area in the left image and the corresponding pit area in the right image according to the disparity values between the pixels in the pit area in the left image and their matched pixels in the corresponding pit area in the right image.
In the embodiment of the present invention, a semi-global matching algorithm may be adopted to obtain the matching relation between pixels in the pit area in the left image and pixels in the corresponding pit area in the right image; the disparity value of each matched pixel pair is obtained, and the sum or the average of the disparity values of all matched pixel pairs is used as the disparity value between the pit area in the left image and the corresponding pit area in the right image. For example, suppose the pit area in the left image has five pixels A1, B1, C1, D1 and E1 and the pit area in the right image has five pixels A2, B2, C2, D2 and E2, where A1 matches A2, B1 matches B2, C1 matches C2, D1 matches D2 and E1 matches E2, and the disparity values of the pairs are a, b, c, d and e respectively; then the disparity value between the pit area in the left image and the corresponding pit area in the right image may be a + b + c + d + e, or (a + b + c + d + e)/5.
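A hedged sketch of this disparity computation with OpenCV's semi-global matcher (all SGBM parameters are assumptions; the patent only names the algorithm family):

```python
import cv2
import numpy as np

def region_disparity(left_gray, right_gray, region_mask):
    """Compute a dense disparity map with semi-global matching and
    average it over the pit region, as in step S103."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = (disp > 0) & region_mask
    return float(disp[valid].mean()) if valid.any() else 0.0
```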
And step S104, positioning the same pit area in the current environment according to the disparity value between the pit area in the left image and the corresponding pit area in the right image.
Optionally, the positioning of the same pit area in the current environment according to the disparity value between the pit area in the left image and the corresponding pit area in the right image includes:
calculating, according to the disparity value between the pit area in the left image and the corresponding pit area in the right image, the distance between the same pit area in the current environment and the binocular camera, the actual width of the same pit area in the current environment, and the azimuth angle between the same pit area in the current environment and the left camera of the binocular camera.
Optionally, the calculating, according to the disparity value between the pit area in the left image and the corresponding pit area in the right image, of the distance between the same pit area in the current environment and the binocular camera, the actual width of the same pit area in the current environment, and the azimuth angle between the same pit area in the current environment and the left camera of the binocular camera includes:
calculating the distance between the same pit area in the current environment and the binocular camera according to the disparity value d between the pit area in the left image and the corresponding pit area in the right image:

Z = f * b / d

where f is the focal length of the left camera of the binocular camera, and b is the baseline length between the left and right cameras of the binocular camera;
calculating, according to the distance Z between the same pit area in the current environment and the binocular camera, the pixel width y_1 that a pit area of the preset actual width y_2 would have in the current environment at the distance Z;
acquiring the pixel width w_pixel of the pit area in the left image;
according to the pixel width w_pixel of the pit area in the left image, the preset actual width y_2, and the pixel width y_1 of a pit area of the preset actual width y_2 at the distance Z, calculating the actual width of the same pit area in the current environment:

w_real = (w_pixel * y_2) / y_1;
Acquiring the offset pixel number h of the pit area in the left image;
according to the offset pixel number h of the pit area in the left image and the actual width w of the same pit area in the current environmentrealAnd the pixel width w of the pit area in the left imagepixelCalculating an actual offset of the same pit area in the current environment
Figure BDA0001141605580000163
and calculating the azimuth angle α = arctan(g/Z) between the same pit area in the current environment and the left camera of the binocular camera according to the actual offset g of the same pit area in the current environment and the distance Z between the same pit area in the current environment and the binocular camera.
In the embodiment of the invention, since the world coordinate system coincides with the coordinate system of the left camera of the binocular camera, the distance between the same pit area in the current environment and the binocular camera is calculated using the focal length of the left camera.
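The chain Z → y_1 → w_real → g → α described above can be sketched directly; the linear model y_1 = -0.00937 * Z + 68.59 for the 200 mm reference width is taken from the worked example below, and treating it as given here is an assumption:

```python
import math

F_PIXELS = 429.0  # focal length of the left camera, in pixels (from the text)
Y2_MM = 200.0     # preset reference pit width, in mm

def locate_pit(d, baseline_mm, w_pixel, h_offset_pixels):
    """Distance, actual width and azimuth of a pit from its disparity d,
    its pixel width and its pixel offset from the left-image center."""
    Z = F_PIXELS * baseline_mm / d          # distance to the camera, mm
    y1 = -0.00937 * Z + 68.59               # pixel width of a 200 mm pit at Z
    w_real = w_pixel * Y2_MM / y1           # actual pit width, mm
    g = h_offset_pixels * w_real / w_pixel  # actual lateral offset, mm
    alpha = math.atan2(g, Z)                # azimuth angle, radians
    return Z, w_real, g, alpha
```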
Assuming that the field angle of the binocular camera is 28 degrees, so that the photographing angle of the binocular camera is 14 degrees, that the height of the binocular camera is 400 mm, and that the minimum actual width of a pit area in the current environment that is allowed to be detected is 200 mm, then, according to the exemplary diagram of the minimum distance between the pit area and the binocular camera shown in Fig. 6a, the minimum distance between the pit area and the binocular camera is

Z_min = 400 / tan 14° ≈ 1600 mm.

According to the formula

w_maxpixel = f * 200 / Z_min = 429 * 200 / 1600 ≈ 53.6,

the maximum pixel width of the pit area on the imaging plane of the binocular camera is found to be 53.6 pixels, where 429 is the focal length f of the left camera of the binocular camera, in pixels.
Fig. 6b is an exemplary diagram of the maximum distance between the pit area and the binocular camera in the current environment. Assuming that the minimum disparity allowed by the range finding of the binocular camera is 2 pixels, that at least 6 pixels are needed in the imaging plane to distinguish the pit area, that the minimum width of a detectable pit area is 200 mm, and that the depth of the pit is 100 mm, the maximum distance between the pit area and the binocular camera is calculated to be Z_max = 5720 mm. According to the formula

w_minpixel = f * 200 / Z_max = 429 * 200 / 5720 = 15,

the minimum pixel width of the pit area on the imaging plane of the binocular camera is 15 pixels, and the minimum pixel height is w_minpixel * tan 14° = 15 * tan 14° ≈ 4 pixels.
Since the minimum distance between the pit area and the binocular camera is 1600 mm, at which an actual pit width of 200 mm corresponds to a pixel width of 53.6 pixels, and the maximum distance is 5720 mm, at which an actual pit width of 200 mm corresponds to a pixel width of 15 pixels, a linear function can be established through the two points (1600, 53.6) and (5720, 15):

y_1 = -0.00937 * x_1 + 68.59,

where x_1 is the distance Z between the pit area and the binocular camera and y_1 is the pixel width corresponding to an actual pit width of 200 mm in the current environment. This gives the pixel width of a 200 mm pit at different pit distances (the pit distance being the distance between the pit area and the binocular camera). The actual width of the pit area is then calculated according to the formula

w_real = (w_pixel * 200) / y_1.
Since the world coordinate system coincides with that of the left camera of the binocular camera, the azimuth angle between the pit area and the left camera is the included angle α between the Z-axis direction of the left camera and the straight line from the focal point to the pit area. Fig. 7 is an exemplary diagram of the azimuth angle between a pit area in the current environment and the left camera of the binocular camera; in Fig. 7, Z_c is the Z-axis direction of the left camera, X_c is the X-axis direction of the left camera, b is the baseline length between the left and right cameras, and Z is the distance from the pit area to the binocular camera. The actual offset of the pit area in the current environment is calculated according to the formula

g = (h * w_real) / w_pixel,

where the offset pixel number h of the pit area in the left image is the offset of the abscissa of the center point of the pit area relative to the abscissa of the center point of the left image, and the actual offset of the pit area in the current environment is the actual offset corresponding to that offset pixel number.
According to the method and the device, pit areas are detected in the left and right images shot by the binocular camera of the current environment; the correspondence between pit areas in the left image and pit areas in the right image is obtained, where corresponding pit areas in the left and right images represent the same pit area in the current environment; and the same pit area in the current environment, represented by the corresponding pit areas in the left and right images, is positioned according to the disparity value between them; namely, the distance between the pit area in the current environment and the binocular camera, the azimuth angle, the actual width of the pit area in the current environment and the like are obtained.
Embodiment two:
fig. 8 is a schematic composition diagram of a positioning apparatus for a pit area according to a second embodiment of the present invention, and for convenience of description, only the parts related to the second embodiment of the present invention are shown, which are detailed as follows:
a pit area detection module 81, configured to detect pit areas in the left and right images shot by the binocular camera of the current environment;
a relation obtaining module 82, configured to obtain the correspondence between pit areas in the left image and pit areas in the right image, where a pit area in the left image and its corresponding pit area in the right image represent the same pit area in the current environment;
a disparity value obtaining module 83, configured to obtain the disparity value between a pit region in the left image and the corresponding pit region in the right image;
and a positioning module 84, configured to position the same pit area in the current environment according to the disparity value between the pit area in the left image and the corresponding pit area in the right image.
Optionally, the disparity value obtaining module 83 includes:
a relation obtaining unit 831, configured to obtain a matching relation between pixels in a pit area in the left image and pixels in a corresponding pit area in the right image;
a disparity value obtaining unit 832, configured to obtain a disparity value between a pit area in the left image and a corresponding pit area in the right image according to a matching relationship between pixels in the pit area in the left image and pixels in the corresponding pit area in the right image.
Optionally, the disparity value obtaining unit 832 includes:
a first acquiring subunit, configured to acquire the disparity values between the pixels in the pit area in the left image and their matched pixels in the corresponding pit area in the right image;
and a second acquiring subunit, configured to acquire the disparity value between the pit area in the left image and the corresponding pit area in the right image according to the disparity values between the pixels in the pit area in the left image and their matched pixels in the corresponding pit area in the right image.
Optionally, the positioning module 84 is configured to:
and calculating the distance between the same pit area in the current environment and the binocular camera, the actual width of the same pit area in the current environment and the azimuth angle between the same pit area in the current environment and the left camera in the binocular camera according to the parallax values of the pit areas in the left image and the corresponding pit areas in the right image.
Optionally, the positioning module 84 includes:
a first calculating unit 841, configured to calculate the distance between the same pit area in the current environment and the binocular camera according to the disparity value d of the pit area in the left image and the corresponding pit area in the right image as
Z = (f × b) / d,
wherein f is the focal length of the left camera in the binocular camera, and b is the baseline length of the left camera and the right camera in the binocular camera;
a second calculating unit 842, configured to determine, according to the distance Z between the same pit area in the current environment and the binocular camera, the pixel width y_1 that a pit area of the preset actual width y_2 has at the distance Z in the current environment;
a width obtaining unit 843, configured to obtain the pixel width w_pixel of the pit area in the left image;
a third calculation unit 844, configured to calculate, according to the pixel width w_pixel of the pit area in the left image, the preset actual width y_2, and the pixel width y_1 of a pit area of the preset actual width y_2 at the distance Z in the current environment, the actual width of the same pit area in the current environment as
w_real = (w_pixel × y_2) / y_1;
An offset obtaining unit 845, configured to obtain an offset pixel number h of the pit area in the left image;
a fourth calculating unit 846, configured to calculate, according to the offset pixel number h of the pit area in the left image, the actual width w_real of the same pit area in the current environment, and the pixel width w_pixel of the pit area in the left image, the actual offset of the same pit area in the current environment as
g = (h × w_real) / w_pixel;
a fifth calculating unit 847, configured to calculate the azimuth angle α = arctan(g/Z) between the same pit area in the current environment and the left camera of the binocular camera according to the actual offset g of the same pit area in the current environment and the distance Z between the same pit area in the current environment and the binocular camera. A consolidated sketch of how these calculating units fit together is given below.
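The following is a minimal, non-authoritative sketch of the pipeline formed by calculating units 841 through 847, assuming Python; the function name, the flat parameter list, and the treatment of y_1 as a precomputed input are illustrative assumptions, not part of the patent:

import math

def locate_pit(d, f, b, w_pixel, h, y1, y2):
    """Sketch of calculating units 841-847 (structure assumed).

    d       : disparity of the pit area between the left and right images (pixels)
    f       : focal length of the left camera (pixels)
    b       : baseline length between the left and right cameras
    w_pixel : pixel width of the pit area in the left image
    h       : offset pixel number of the pit area in the left image
    y1      : pixel width of a pit of preset actual width y2 at distance Z
    y2      : preset actual width used for calibration
    """
    Z = f * b / d                # unit 841: distance from disparity
    w_real = w_pixel * y2 / y1   # unit 844: actual width from pixel width
    g = h * w_real / w_pixel     # unit 846: actual offset
    alpha = math.atan(g / Z)     # unit 847: azimuth angle (radians)
    return Z, w_real, g, alpha

In claim 1, the pixel width y_1 is itself obtained from the fitted relation y_1 = −0.00937·x_1 + 68.59 with x_1 = Z, so it could equally be computed inside the function rather than passed in.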
The positioning apparatus for pit areas provided in this embodiment of the present invention can be used in the foregoing first method embodiment; for details, refer to the description of that embodiment, which is not repeated herein.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the foregoing function distribution may be completed by different functional modules as required, that is, the internal structure of the apparatus is divided into different functional modules, and the functional modules may be implemented in a hardware form or a software form. In addition, the specific names of the functional modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application.
In summary, in the embodiments of the present invention, pit areas are detected in the left and right images shot by the binocular camera for the current environment, and the corresponding relation between the pit areas in the left image and the pit areas in the right image is obtained, the pit area in the left image and the corresponding pit area in the right image representing the same pit area in the current environment. That pit area is then positioned according to the disparity value between the pit area in the left image and the corresponding pit area in the right image; that is, the distance between the pit area in the current environment and the binocular camera, the azimuth angle, the actual width of the pit area in the current environment, and the like are obtained.
It will be further understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A method of locating a pit area, the method comprising:
detecting pit areas in left and right images shot by a binocular camera aiming at the current environment;
acquiring a corresponding relation between the pit area in the left image and the pit area in the right image, wherein the pit area in the left image and the corresponding pit area in the right image represent the same pit area in the current environment;
intercepting a pit area from the left image according to a preset size, and intercepting the corresponding pit area from the right image according to the preset size, wherein the area intercepted according to the preset size needs to comprise the pit area and expanded preset pixels; acquiring a disparity map of the pit area intercepted from the left image and the pit area intercepted from the right image; acquiring a first disparity value d_1 and a second disparity value d_2 of two areas taken to the left side and the right side of the pit area position of the disparity map, and a third disparity value d_3 of the pit area in the disparity map; according to the formula
Z_i = (f × b) / d_i,
wherein f is the focal length of the left camera in the binocular camera, b is the baseline length of the left camera and the right camera in the binocular camera, and the values of i are 1, 2 and 3 respectively, respectively calculating the distance Z_1 between the area taken to the left from the pit area position of the disparity map and the binocular camera in the actual shooting environment, the distance Z_2 between the area taken to the right from the pit area position of the disparity map and the binocular camera in the actual shooting environment, and the distance Z_3 between the pit area in the disparity map and the binocular camera in the actual shooting environment; and if the distances between the two taken areas and the binocular camera in the actual shooting environment are smaller than the distance between the pit area in the disparity map and the binocular camera in the actual shooting environment, determining that the pit area is a real pit area;
acquiring a parallax value of a pit area in the left image and a corresponding pit area in the right image;
positioning the same pit area in the current environment according to the disparity value of the pit area in the left image and the corresponding pit area in the right image, including: determining, according to the relation y_1 = −0.00937·x_1 + 68.59, the pixel width y_1 that a pit area of the preset actual width y_2 has at the distance Z in the current environment, wherein x_1 is the distance Z between the pit area and the binocular camera;
acquiring the pixel width w_pixel of the pit area in the left image;
calculating, according to the pixel width w_pixel of the pit area in the left image, the preset actual width y_2, and the pixel width y_1 of a pit area of the preset actual width y_2 at the distance Z in the current environment, the actual width of the same pit area in the current environment as
w_real = (w_pixel × y_2) / y_1;
Acquiring the offset pixel number h of the pit area in the left image;
calculating, according to the offset pixel number h of the pit area in the left image, the actual width w_real of the same pit area in the current environment, and the pixel width w_pixel of the pit area in the left image, the actual offset of the same pit area in the current environment as
g = (h × w_real) / w_pixel;
and calculating the azimuth angle α = arctan(g/Z) between the same pit area in the current environment and the left camera in the binocular camera according to the actual offset g of the same pit area in the current environment and the distance Z between the same pit area in the current environment and the binocular camera.
2. The method according to claim 1, wherein the obtaining the disparity value of the pit area in the left image and the corresponding pit area in the right image comprises:
acquiring a matching relation between pixels in the pit area in the left image and pixels in the corresponding pit area in the right image;
and acquiring a parallax value of the pit area in the left image and the corresponding pit area in the right image according to the matching relation between the pixels in the pit area in the left image and the pixels in the corresponding pit area in the right image.
3. The method according to claim 2, wherein the obtaining the disparity value of the pit area in the left image and the corresponding pit area in the right image according to the matching relationship between the pixels in the pit area in the left image and the pixels in the corresponding pit area in the right image comprises:
acquiring a parallax value of a pixel in a pit area in the left image and a pixel matched with the pixel in a corresponding pit area in the right image;
and acquiring a parallax value of the pit area in the left image and the corresponding pit area in the right image according to the parallax value of the pixels in the pit area in the left image and the matched pixels in the corresponding pit area in the right image.
4. The method according to any one of claims 1 to 3, wherein said locating the same pit area in the current environment according to the disparity values of the pit area in the left image and the corresponding pit area in the right image comprises:
and calculating the distance between the same pit area in the current environment and the binocular camera, the actual width of the same pit area in the current environment and the azimuth angle between the same pit area in the current environment and the left camera in the binocular camera according to the parallax values of the pit areas in the left image and the corresponding pit areas in the right image.
5. An apparatus for locating a pit area, the apparatus comprising:
the pit area detection module is used for detecting pit areas in left and right images shot by the binocular camera aiming at the current environment;
a relation obtaining module, configured to obtain the corresponding relation between the pit area in the left image and the pit area in the right image, where the pit area in the left image and the corresponding pit area in the right image represent the same pit area in the current environment;
a disparity value obtaining module, configured to: intercept a pit area from the left image according to a preset size, and intercept the corresponding pit area from the right image according to the preset size, wherein the area intercepted according to the preset size needs to comprise the pit area and expanded preset pixels; acquire a disparity map of the pit area intercepted from the left image and the pit area intercepted from the right image; acquire a first disparity value d_1 and a second disparity value d_2 of two areas taken to the left side and the right side of the pit area position of the disparity map, and a third disparity value d_3 of the pit area in the disparity map; according to the formula
Z_i = (f × b) / d_i,
wherein the values of i are 1, 2 and 3 respectively, respectively calculate the distance Z_1 between the area taken to the left from the pit area position of the disparity map and the binocular camera in the actual shooting environment, the distance Z_2 between the area taken to the right from the pit area position of the disparity map and the binocular camera in the actual shooting environment, and the distance Z_3 between the pit area in the disparity map and the binocular camera in the actual shooting environment; and if the distances between the two taken areas and the binocular camera in the actual shooting environment are smaller than the distance between the pit area in the disparity map and the binocular camera in the actual shooting environment, determine that the pit area is a real pit area;
the positioning module is used for positioning the same pit area in the current environment according to the parallax value of the pit area in the left image and the corresponding pit area in the right image; the positioning module includes:
a first calculating unit, configured to calculate the distance between the same pit area in the current environment and the binocular camera according to the disparity value d of the pit area in the left image and the corresponding pit area in the right image as
Z = (f × b) / d,
wherein f is the focal length of the left camera in the binocular camera, and b is the baseline length of the left camera and the right camera in the binocular camera;
a second calculating unit, configured to determine, according to the distance Z between the same pit area in the current environment and the binocular camera, the pixel width y_1 that a pit area of the preset actual width y_2 has at the distance Z in the current environment, specifically: determining y_1 according to the relation y_1 = −0.00937·x_1 + 68.59, wherein x_1 is the distance Z between the pit area and the binocular camera;
a width acquisition unit, configured to acquire the pixel width w_pixel of the pit area in the left image;
a third calculation unit, configured to calculate, according to the pixel width w_pixel of the pit area in the left image, the preset actual width y_2, and the pixel width y_1 of a pit area of the preset actual width y_2 at the distance Z in the current environment, the actual width of the same pit area in the current environment as
w_real = (w_pixel × y_2) / y_1;
an offset acquisition unit, configured to acquire the offset pixel number h of the pit area in the left image;
a fourth calculating unit, configured to calculate, according to the offset pixel number h of the pit area in the left image, the actual width w_real of the same pit area in the current environment, and the pixel width w_pixel of the pit area in the left image, the actual offset of the same pit area in the current environment as
g = (h × w_real) / w_pixel;
and a fifth calculating unit, configured to calculate the azimuth angle α = arctan(g/Z) between the same pit area in the current environment and the left camera in the binocular camera according to the actual offset g of the same pit area in the current environment and the distance Z between the same pit area in the current environment and the binocular camera.
6. The apparatus of claim 5, wherein the disparity value obtaining module comprises:
a relation obtaining unit, configured to obtain a matching relation between pixels in a pit area in the left image and pixels in a corresponding pit area in the right image;
and the parallax value acquisition unit is used for acquiring the parallax value of the pit area in the left image and the corresponding pit area in the right image according to the matching relation between the pixels in the pit area in the left image and the pixels in the corresponding pit area in the right image.
7. The apparatus according to claim 6, wherein the disparity value obtaining unit includes:
the first acquisition subunit is used for acquiring the disparity value between pixels in the pit area in the left image and the matched pixels in the corresponding pit area in the right image;
and the second acquiring subunit is configured to acquire the disparity value between the pit area in the left image and the corresponding pit area in the right image according to the disparity value between pixels in the pit area in the left image and the matched pixels in the corresponding pit area in the right image.
8. The apparatus of any one of claims 5 to 7, wherein the positioning module is configured to:
and calculating the distance between the same pit area in the current environment and the binocular camera, the actual width of the same pit area in the current environment and the azimuth angle between the same pit area in the current environment and the left camera in the binocular camera according to the parallax values of the pit areas in the left image and the corresponding pit areas in the right image.
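As an illustrative aside, not part of the claims: the real-pit verification recited in claims 1 and 5 can be sketched in Python as follows, where the function name and signature are assumptions for exposition only.

def is_real_pit(d1, d2, d3, f, b):
    """Sketch of the real-pit check in claims 1 and 5 (names assumed).

    d1, d2 : disparity values of the areas taken to the left and right
             of the pit area position in the disparity map
    d3     : disparity value of the pit area itself in the disparity map
    f, b   : focal length of the left camera and baseline length
    """
    # Z_i = (f * b) / d_i: distance of each area in the actual shooting environment.
    Z1, Z2, Z3 = (f * b / d for d in (d1, d2, d3))
    # A real pit lies deeper than its surroundings, so the ground on both
    # sides must be closer to the camera than the pit region.
    return Z1 < Z3 and Z2 < Z3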
CN201610950780.5A 2016-10-26 2016-10-26 Pit area positioning method and device Active CN107993233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610950780.5A CN107993233B (en) 2016-10-26 2016-10-26 Pit area positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610950780.5A CN107993233B (en) 2016-10-26 2016-10-26 Pit area positioning method and device

Publications (2)

Publication Number Publication Date
CN107993233A CN107993233A (en) 2018-05-04
CN107993233B true CN107993233B (en) 2022-02-22

Family

ID=62029474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610950780.5A Active CN107993233B (en) 2016-10-26 2016-10-26 Pit area positioning method and device

Country Status (1)

Country Link
CN (1) CN107993233B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349205B (en) * 2019-07-22 2021-05-28 浙江光珀智能科技有限公司 Method and device for measuring volume of object
CN110374045B (en) * 2019-07-29 2021-09-28 哈尔滨工业大学 Intelligent deicing method
CN112204345A (en) * 2020-01-20 2021-01-08 珊口(深圳)智能科技有限公司 Indoor positioning method of mobile equipment, mobile equipment and control system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739553B (en) * 2009-12-10 2012-01-11 青岛海信网络科技股份有限公司 Method for identifying target in parallax image
WO2012023330A1 (en) * 2010-08-16 2012-02-23 富士フイルム株式会社 Image processing device, image processing method, image processing program, and recording medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700414A (en) * 2015-03-23 2015-06-10 华中科技大学 Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera
CN104933718A (en) * 2015-06-23 2015-09-23 广东省自动化研究所 Physical coordinate positioning method based on binocular vision
CN105354856A (en) * 2015-12-04 2016-02-24 北京联合大学 Human matching and positioning method and system based on MSER and ORB
CN106384363A (en) * 2016-09-13 2017-02-08 天津大学 Fast adaptive weight stereo matching algorithm

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Guo Lie et al., "Pit recognition method based on two-dimensional maximum entropy threshold segmentation", Computer Engineering and Applications, 2006-08-16, pp. 226-228 *
Fang Qingsong et al., "Positioning and navigation technology for indoor mobile robots based on light spots", Microcomputer & Its Applications, 2012, vol. 31, no. 24, pp. 51-53, 57 *
Meng Lingjiang, "Research on pit area recognition in field environments based on stereo vision", http://ir.sia.cn/handle/173321/19675, 2016-05-25, pp. 1-85 *
Zhao Xia et al., "Research progress of vision-based object localization technology", Computer Science, 2016-06-30, vol. 43, no. 6, pp. 10-16, 43 *
Meng Lingjiang et al., "Extraction method of pit areas in images of field environments", Journal of Computer Applications, 2016-04-10, vol. 36, no. 4, pp. 1132-1136 *

Also Published As

Publication number Publication date
CN107993233A (en) 2018-05-04

Similar Documents

Publication Publication Date Title
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
CN109859226B (en) Detection method of checkerboard corner sub-pixels for graph segmentation
CN109086724B (en) Accelerated human face detection method and storage medium
CN110992263B (en) Image stitching method and system
CN104751465A (en) ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint
JP2012118698A (en) Image processing system
CN104933434A (en) Image matching method combining length between perpendiculars (LBP) feature extraction method and surf feature extraction method
CN104008542B (en) A kind of Fast Corner matching process for specific plane figure
CN104636724B (en) A kind of quick Pedestrians and vehicles detection method of in-vehicle camera based on goal congruence
CN107993233B (en) Pit area positioning method and device
CN104268853A (en) Infrared image and visible image registering method
CN107832674B (en) Lane line detection method
CN108010075B (en) Local stereo matching method based on multi-feature combination
CN109255802B (en) Pedestrian tracking method, device, computer equipment and storage medium
CN106504289B (en) indoor target detection method and device
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN110245600A (en) Adaptively originate quick stroke width unmanned plane Approach for road detection
CN109671098B (en) Target tracking method and system applicable to multiple tracking
JP4681592B2 (en) Speed measurement method
CN110223356A (en) A kind of monocular camera full automatic calibration method based on energy growth
CN110675442A (en) Local stereo matching method and system combined with target identification technology
JP6585668B2 (en) Object detection device
Pan et al. An efficient method for skew correction of license plate
CN117611525A (en) Visual detection method and system for abrasion of pantograph slide plate
WO2014054124A1 (en) Road surface markings detection device and road surface markings detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant