CN113298869B - Distance measuring method, distance measuring device, computer device, and storage medium - Google Patents

Distance measuring method, distance measuring device, computer device, and storage medium

Info

Publication number
CN113298869B
CN113298869B (application CN202110440724.8A)
Authority
CN
China
Prior art keywords
pixel point
pixel
image
matched
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110440724.8A
Other languages
Chinese (zh)
Other versions
CN113298869A (en)
Inventor
吴新桥
郭晓斌
何超林
王昊
李彬
刘岚
蔡思航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Southern Power Grid Digital Grid Technology Guangdong Co ltd
Original Assignee
China Southern Power Grid Digital Grid Technology Guangdong Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Southern Power Grid Digital Grid Technology Guangdong Co ltd
Priority to CN202110440724.8A
Publication of CN113298869A
Application granted
Publication of CN113298869B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The application relates to a distance measurement method, a distance measurement device, a computer device, and a storage medium. The method comprises the following steps: acquiring a first image and a second image which are shot by a binocular camera and contain a target point to be measured, and determining first position information of the corresponding first pixel point in the first image; determining predicted position information corresponding to the first position information in the second image, and acquiring the corresponding predicted pixel point; acquiring at least one pixel point to be matched corresponding to the predicted position information in the second image, where the distance between each pixel point to be matched and the predicted pixel point is smaller than a preset distance threshold; obtaining the matching cost between each pixel point to be matched and the first pixel point, taking the pixel point to be matched with the minimum matching cost as the corresponding second pixel point in the second image, and determining its second position information; and determining a parallax value from the first position information and the second position information, and obtaining the distance from the parallax value. The method can improve the measurement accuracy of distance measurement.

Description

Distance measuring method, distance measuring device, computer device, and storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a distance measurement method, apparatus, computer device, and storage medium.
Background
With the development of computer vision technology, binocular distance measurement has emerged: the two cameras of a binocular rig photograph the same target point, the pixel points corresponding to that target point are located in the two captured images, the parallax value between the corresponding pixel points is determined, and the distance from the binocular camera to the target point is computed from the obtained parallax value.
At present, to reduce the time complexity of matching pixel points between the two images, a calibration (rectification) process makes the ordinates of corresponding points in the two images consistent, so that the search region shrinks from the two-dimensional image plane to a one-dimensional line and corresponding points need only be searched along the same row. However, when the camera uses a wide-angle lens with a large field of view, the captured image suffers severe distortion in its edge regions, so the images cannot be aligned accurately, and the measurement precision of the current distance measurement method is therefore low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a distance measuring method, apparatus, computer device, and storage medium.
A method of distance measurement, the method comprising:
Acquiring a first image and a second image which are shot by a binocular camera and carry a target point to be detected, and determining first position information of a first pixel point corresponding to the target point to be detected in the first image;
determining predicted position information corresponding to the first position information in the second image, and acquiring predicted pixel points corresponding to the predicted position information;
acquiring at least one pixel point to be matched corresponding to the predicted position information in the second image; the distance between the pixel to be matched and the predicted pixel point is smaller than a preset distance threshold value;
obtaining the matching cost of each pixel point to be matched and the first pixel point, taking the pixel point to be matched with the minimum matching cost as a second pixel point corresponding to the target point to be detected in the second image, and determining second position information of the second pixel point;
and determining a parallax value corresponding to the target point to be detected according to the first position information and the second position information, and acquiring the distance between the binocular camera and the target point to be detected by utilizing the parallax value.
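Taken together, the five steps above form a predict-then-refine search. A minimal sketch of that flow in Python (all function names and the injected sub-procedures are illustrative, not taken from the patent):

```python
def measure_distance(second_image, first_xy, predict, candidates, cost, depth):
    """Skeleton of the five claim steps; the concrete sub-procedures
    (prediction, candidate search, matching cost, depth model) are injected."""
    p1 = first_xy                               # step 1: first pixel point position
    q = predict(p1)                             # step 2: predicted position in second image
    cands = candidates(second_image, q)         # step 3: pixels within the distance threshold
    p2 = min(cands, key=lambda c: cost(p1, c))  # step 4: minimum matching cost wins
    parallax = abs(p1[0] - p2[0])               # step 5: parallax from the two positions
    return parallax, depth(parallax)
```

Any concrete predictor, candidate filter, and cost function from the embodiments below can be plugged into this skeleton without changing the overall flow.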
In one embodiment, the determining, in the second image, predicted position information corresponding to the first position information includes: acquiring first reference position information of a first reference pixel point in the first image, and acquiring second reference position information of a second reference pixel point corresponding to the first reference pixel point in the second image; acquiring the relative distance between the first pixel point and the first reference pixel point according to the first reference position information and the first position information; and acquiring the predicted position information according to the relative distance and the second reference position information.
In one embodiment, the obtaining at least one pixel to be matched corresponding to the predicted position information in the second image includes: acquiring a matching region taking the predicted position information as a center and the distance threshold as a radius from the second image, and taking pixel points in the matching region as initial matching pixel points; determining phase angle differences and amplitude differences of each initial matching pixel point and the predicted pixel point; and taking the initial matched pixel point with the phase angle difference smaller than a preset phase angle difference threshold and the amplitude difference smaller than a preset amplitude difference threshold as the pixel point to be matched.
In one embodiment, the obtaining the matching cost of each pixel to be matched and the first pixel includes: determining a current pixel point to be matched from the pixel points to be matched, acquiring a first rectangular window corresponding to the current pixel point to be matched, and acquiring gray values of the pixel points in the first rectangular window; acquiring a first gray scale comparison code corresponding to the current pixel point to be matched according to the gray scale value of each pixel point in the first rectangular window; acquiring a second rectangular window corresponding to the first pixel point and gray values of all pixel points in the second rectangular window; acquiring a second gray scale comparison code corresponding to the first pixel point according to the gray scale value of each pixel point in the second rectangular window; and acquiring the hamming distance between the first gray scale comparison code and the second gray scale comparison code, and determining the matching cost of the current pixel point to be matched and the first pixel point according to the hamming distance.
In one embodiment, before the obtaining the gray value of each pixel point in the first rectangular window, the method further includes: performing region segmentation on the second image by using an image segmentation algorithm, and determining an image region of each pixel point in the second image; the obtaining the gray value of each pixel point in the first rectangular window includes: selecting target pixel points which are the same as the image area of the current pixel point to be matched from the first rectangular window, and acquiring gray values of all target pixel points in the first rectangular window; the obtaining a first gray scale comparison code corresponding to the current pixel to be matched according to the gray scale value of each pixel in the first rectangular window includes: and acquiring the first gray scale comparison code according to the gray scale value of each target pixel point.
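A sketch of the window restriction described above, assuming the segmentation has already produced an integer label map of the same shape as the image (the function name and window parameterization are illustrative):

```python
def same_region_grays(gray, labels, y, x, h=1, w=1):
    """Collect the gray values of pixels inside the (2h+1)x(2w+1) window
    centered at (y, x) that lie in the same segmented image region as the
    center pixel; only these target pixels feed the gray comparison code."""
    center_label = labels[y][x]
    values = []
    for yy in range(y - h, y + h + 1):
        for xx in range(x - w, x + w + 1):
            if labels[yy][xx] == center_label:
                values.append(gray[yy][xx])
    return values
```

Restricting the window to the center pixel's own region keeps gray values from a neighboring object (across a depth discontinuity) out of the comparison code.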
In one embodiment, the obtaining the first gray scale comparison code according to the gray scale value of each target pixel point includes: acquiring a gray scale interval in which each target pixel point is located, and a gray scale average value and the number of pixel points corresponding to the gray scale interval; acquiring the total number of pixel points of the target pixel points; acquiring gray comparison values corresponding to the target pixel points according to the gray average value, the number of the pixel points and the total number of the pixel points; and determining the first gray scale comparison code according to the magnitude relation between the gray scale comparison value and the gray scale value of the pixel point to be matched currently.
In one embodiment, the determining the matching cost of the current pixel to be matched and the first pixel according to the hamming distance includes: taking the current pixel point to be matched as a center, and acquiring a matching cost determination area corresponding to the current pixel point to be matched; acquiring an image area in which each pixel point in the matching cost determination area is positioned, and determining a matching cost weight of each pixel point in the matching cost determination area according to the image area; and determining the matching cost by utilizing the matching cost weight and the hamming distance.
A distance measurement device, the device comprising:
the first position determining module is used for acquiring a first image and a second image which are shot by the binocular camera and carry target points to be detected, and determining first position information of a first pixel point corresponding to the target points to be detected in the first image;
a predicted position acquisition module, configured to determine predicted position information corresponding to the first position information in the second image, and acquire a predicted pixel point corresponding to the predicted position information;
the to-be-matched point acquisition module is used for acquiring at least one to-be-matched pixel point corresponding to the predicted position information in the second image; the distance between the pixel points to be matched and the predicted pixel points is smaller than a preset distance threshold value;
The matching cost acquisition module is used for acquiring the matching cost of each pixel point to be matched and the first pixel point, taking the pixel point to be matched with the minimum matching cost as a second pixel point corresponding to the target point to be detected in the second image, and determining second position information of the second pixel point;
the distance to be measured measuring module is used for determining a parallax value corresponding to the target point to be measured according to the first position information and the second position information, and obtaining the distance between the binocular camera and the target point to be measured by utilizing the parallax value.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
The distance measuring method, the distance measuring device, the computer equipment, and the storage medium acquire a first image and a second image which are shot by the binocular camera and contain a target point to be measured, and determine first position information of the first pixel point corresponding to the target point in the first image; determine predicted position information corresponding to the first position information in the second image, and acquire the predicted pixel point corresponding to the predicted position information; acquire at least one pixel point to be matched corresponding to the predicted position information in the second image, where the distance between each pixel point to be matched and the predicted pixel point is smaller than a preset distance threshold; obtain the matching cost of each pixel point to be matched with the first pixel point, take the pixel point to be matched with the minimum matching cost as the second pixel point corresponding to the target point in the second image, and determine its second position information; and determine the parallax value corresponding to the target point from the first and second position information, obtaining the distance between the binocular camera and the target point from the parallax value. By searching the second image for pixel points to be matched whose distance to the predicted pixel point is smaller than the distance threshold, and taking the one with the minimum matching cost as the second pixel point, the method reduces the influence of inaccurate image alignment while keeping the time complexity of pixel point matching under control, thereby improving the measurement precision of the distance measurement method.
Drawings
FIG. 1 is a flow chart of a distance measurement method according to an embodiment;
FIG. 2 is a flow diagram of determining predicted location information in one embodiment;
FIG. 3 is a flowchart of acquiring pixels to be matched in one embodiment;
FIG. 4 is a flow chart of obtaining a matching cost with a first pixel in one embodiment;
FIG. 5 is a flowchart of a first gray scale comparison code acquisition process according to an embodiment;
FIG. 6 is a schematic representation of predicted coordinates in one example application;
FIG. 7 is a diagram showing a meanshift segmentation effect in an example application;
FIG. 8 is a diagram showing the effect of window histogram transformation in an example application;
FIG. 9 is a block diagram showing the configuration of a distance measuring device according to an embodiment;
fig. 10 is an internal structural view of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a distance measurement method is provided. The method is described here as applied to a terminal; it is understood that it may also be applied to a server, or to a system comprising a terminal and a server and implemented through their interaction. In this embodiment, the method includes the following steps:
Step S101, a terminal acquires a first image and a second image which are shot by a binocular camera and carry a target point to be detected, and determines first position information of a first pixel point corresponding to the target point to be detected in the first image.
The first image and the second image are the images captured by the two cameras of the binocular camera, or partial images extracted from those captures by a target detection algorithm. Both images contain the target point to be measured, that is, a pixel point corresponding to it. The first pixel point is the pixel point corresponding to the target point in the first image, and the first position information is the position of the first pixel point in the first image, for example its coordinates in the image coordinate system of the first image. Specifically, the terminal may read the first image and the second image captured by the binocular camera, and find in the first image the first pixel point corresponding to the target point and its first position information.
In step S102, the terminal determines predicted position information corresponding to the first position information in the second image, and obtains predicted pixel points corresponding to the predicted position information.
The predicted pixel point is the pixel point in the second image that the terminal estimates, from the first position information, to correspond to the first pixel point, and the predicted position information is its position in the second image. For example, it can be determined from the relative positional relationship between the first image and the second image for the same pixel point; this relationship may be preset in the terminal to match the binocular camera used for shooting. After obtaining the first position information of the first pixel point, the terminal can find the corresponding predicted position information in the second image based on this relative positional relationship, and take the pixel point at the predicted position as the predicted pixel point.
Step S103, the terminal obtains at least one pixel point to be matched corresponding to the predicted position information in the second image; the distance between the pixel points to be matched and the predicted pixel points is smaller than a preset distance threshold.
The terminal may then obtain, from the second image, the pixel points to be matched with the first pixel point according to the predicted position information. Because of shooting errors, or deviations introduced when the target detection algorithm extracts the partial images, the predicted pixel point may not be the pixel point that actually corresponds to the target point in the second image, so the terminal first needs to determine the candidate pixel points that may actually correspond. Specifically, based on the position of the predicted pixel point and the preset distance threshold, the terminal takes every pixel point in the second image whose distance to the predicted pixel point is smaller than the threshold as a pixel point to be matched.
Step S104, the terminal obtains the matching cost of each pixel point to be matched and the first pixel point, takes the pixel point to be matched with the minimum matching cost as a second pixel point corresponding to the target point to be detected in the second image, and determines second position information of the second pixel point.
The second pixel point is the pixel point in the second image that actually corresponds to the target point to be measured, and the second position information may be its coordinates in the image coordinate system of the second image. Specifically, after obtaining the at least one pixel point to be matched, the terminal may calculate, for each of them, the matching cost with the first pixel point, take the one with the smallest matching cost as the final second pixel point, and determine its position information as the second position information.
Step S105, the terminal determines a parallax value corresponding to the target point to be detected according to the first position information and the second position information, and obtains the distance between the binocular camera and the target point to be detected by using the parallax value.
Finally, the terminal can calculate the parallax value of the target point to be measured from the obtained first position information and second position information, and determine the distance between the binocular camera and the target point based on the obtained parallax value.
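The excerpt does not spell out how the parallax value is converted to a distance; for a calibrated pinhole stereo pair the standard relation Z = f * B / d applies, sketched here (function and parameter names are illustrative, not from the patent):

```python
def distance_from_parallax(parallax_px, focal_px, baseline_m):
    """Standard pinhole stereo relation Z = f * B / d: focal length f
    expressed in pixels, baseline B in metres, parallax d in pixels."""
    if parallax_px <= 0:
        raise ValueError("parallax must be positive")
    return focal_px * baseline_m / parallax_px
```

For instance, an 800 px focal length, a 0.12 m baseline, and a 16 px parallax give a 6 m range; note that range resolution degrades quadratically as parallax shrinks.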
In addition, before determining the parallax value, a left-right consistency check may be performed on the images captured by the binocular camera. The left image and the right image are each used in turn as the first image, with the other as the second image; the steps above then yield two parallax values, one with the left image as the first image and one with the right image as the first image. A left-right consistency check determines whether the two parallax values agree. If they agree, the distance is calculated from the parallax value; if not, whether the target point is occluded or mismatched is judged by whether the epipolar lines intersect. For an occluded outlier, the terminal searches left and right along the horizontal direction for the first non-outlier points and takes the smaller of their two parallax values as the parallax value of the outlier. For a mismatched outlier, the parallax value of the non-outlier point in the neighborhood with the closest gray value is taken as the parallax value of that point.
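A minimal sketch of the consistency test and the occlusion fill-in just described, assuming a per-row parallax array with a sentinel value marking outliers (the layout, sentinel, and names are assumptions):

```python
def lr_consistent(d_left_as_first, d_right_as_first, tol=1):
    """Accept the match only if the two parallax values agree within tol."""
    return abs(d_left_as_first - d_right_as_first) <= tol

def fill_occluded(parallax_row, x, invalid=-1):
    """For an occluded outlier at column x, scan left and right along the
    row for the first non-outlier parallax values and take the smaller of
    the two (the farther surface), as the text prescribes."""
    left = next((parallax_row[i] for i in range(x - 1, -1, -1)
                 if parallax_row[i] != invalid), None)
    right = next((parallax_row[i] for i in range(x + 1, len(parallax_row))
                  if parallax_row[i] != invalid), None)
    found = [v for v in (left, right) if v is not None]
    return min(found) if found else invalid
```

Taking the smaller of the two neighboring parallax values biases the fill toward the background, which is usually what an occluded pixel belongs to.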
In this distance measuring method, the terminal acquires a first image and a second image which are shot by the binocular camera and contain a target point to be measured, and determines first position information of the first pixel point corresponding to the target point in the first image; determines predicted position information corresponding to the first position information in the second image, and acquires the predicted pixel point corresponding to it; acquires at least one pixel point to be matched corresponding to the predicted position information in the second image, where the distance between each pixel point to be matched and the predicted pixel point is smaller than a preset distance threshold; obtains the matching cost of each pixel point to be matched with the first pixel point, takes the pixel point to be matched with the minimum matching cost as the second pixel point corresponding to the target point in the second image, and determines its second position information; and determines the parallax value corresponding to the target point from the first and second position information, obtaining the distance between the binocular camera and the target point from the parallax value. By searching the second image for pixel points to be matched whose distance to the predicted pixel point is smaller than the distance threshold, and taking the one with the minimum matching cost as the second pixel point, the influence of inaccurate image alignment is reduced while the time complexity of pixel point matching is kept under control, improving the measurement precision of the distance measurement method.
In one embodiment, as shown in fig. 2, step S102 may further include:
in step S201, the terminal obtains first reference position information of a first reference pixel point in the first image, and obtains second reference position information of a second reference pixel point corresponding to the first reference pixel point in the second image.
The first reference pixel point may be any pixel point in the first image, for example a boundary point or the image center point, and the second reference pixel point is the pixel point corresponding to it in the second image. For example, the terminal may select the upper left corner vertex of the extracted first image as the first reference pixel point and the upper left corner vertex of the second image as the second reference pixel point, and determine their respective position information as the first reference position information and the second reference position information.
Step S202, a terminal obtains the relative distance between a first pixel point and a first reference pixel point according to first reference position information and first position information;
In step S203, the terminal obtains predicted position information according to the relative distance and the second reference position information.
And then, the terminal can calculate the relative distance relation between the first pixel point and the first reference pixel point according to the obtained first reference position information and the first position information, and obtain the predicted position information of the predicted pixel point by using the relative distance and the obtained second reference position information in the second image.
For example, the predicted position information may be calculated by the following formulas: x_q = x_qr + dx, y_q = y_qr + dy, with dx = x_p - x_pr and dy = y_p - y_pr, where (x_q, y_q) represents the predicted position information, (x_p, y_p) the first position information, (x_qr, y_qr) the second reference position information, and (x_pr, y_pr) the first reference position information; dx and dy represent the relative distances between the first pixel point and the first reference pixel point in the x and y directions, respectively.
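The formulas translate directly into a small helper (names illustrative; coordinates as (x, y) tuples):

```python
def predict_position(p, p_ref, q_ref):
    """Offset of the first pixel point from its reference point, applied
    to the second image's reference point: (x_q, y_q) = (x_qr + dx, y_qr + dy)."""
    dx = p[0] - p_ref[0]   # dx = x_p - x_pr
    dy = p[1] - p_ref[1]   # dy = y_p - y_pr
    return (q_ref[0] + dx, q_ref[1] + dy)
```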
Further, as shown in fig. 3, step S103 may further include:
in step S301, the terminal acquires a matching region with the predicted position information as the center and the distance threshold as the radius in the second image, and uses the pixel point in the matching region as the initial matching pixel point.
The initial matching pixel points are all pixel points in the second image whose distance to the predicted pixel point is smaller than the preset distance threshold. Specifically, the terminal can obtain a circular matching region centered on the predicted position information of the predicted pixel point with the distance threshold as its radius, and take the pixel points within this region as the initial matching pixel points.
Step S302, the terminal determines the phase angle difference and the amplitude difference between each initial matched pixel point and the predicted pixel point;
in step S303, the terminal uses the initial matching pixel point whose phase angle difference is smaller than the preset phase angle difference threshold value and whose amplitude difference is smaller than the preset amplitude difference threshold value as the pixel point to be matched.
And then, the terminal can respectively calculate the phase angle difference and the amplitude difference of each initial matching pixel point and the predicted pixel point, and the initial matching pixel point with the phase angle difference smaller than a preset phase angle difference threshold value and the amplitude difference smaller than a preset amplitude difference threshold value is used as the final pixel point to be matched.
For example, the final pixel point to be matched needs to satisfy the following constraints: D_s(p_i, p) < L, D_φ(p_i, p) < θ, and D_m(p_i, p) < γ, where D_s(p_i, p) represents the distance between the pixel point to be matched and the predicted pixel point, D_φ(p_i, p) represents the phase angle difference between them, D_m(p_i, p) represents the amplitude difference between them, and L, θ, and γ represent the distance threshold, the phase angle difference threshold, and the amplitude difference threshold, respectively.
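A sketch of the three-constraint filter. How a pixel's phase angle and amplitude are computed is not specified in this excerpt; here `features` is assumed to map a pixel coordinate to its (phase angle, amplitude) pair, for example derived from the image gradient, and all names and thresholds are illustrative:

```python
import math

def passes_constraints(p_i, p, features, L=5.0, theta=0.3, gamma=10.0):
    """Keep candidate p_i only if D_s < L, D_phi < theta, and D_m < gamma,
    i.e. it is close to the predicted pixel point p in position, phase
    angle, and amplitude."""
    d_s = math.hypot(p_i[0] - p[0], p_i[1] - p[1])
    d_phi = abs(features[p_i][0] - features[p][0])
    d_m = abs(features[p_i][1] - features[p][1])
    return d_s < L and d_phi < theta and d_m < gamma
```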
In this embodiment, the terminal determines the predicted position information from the relative distance between the first reference position information and the first position information, finds the initial matching pixel points within the matching region derived from that predicted position, and selects the final pixel points to be matched by their phase angle and amplitude differences from the predicted pixel point. This refines the region in which pixel point matching must be performed and further improves the accuracy of distance measurement.
In one embodiment, as shown in fig. 4, step S104 may further include:
in step S401, the terminal determines a current pixel to be matched from the pixels to be matched, obtains a first rectangular window corresponding to the current pixel to be matched, and obtains a gray value of each pixel in the first rectangular window.
The current pixel point to be matched may be any one of the pixel points to be matched, and the first rectangular window may be a rectangular window area centered on the current pixel point to be matched and containing a plurality of pixel points. Specifically, the terminal determines a current pixel point to be matched from the pixel points to be matched, selects a rectangular window of a preset size centered on it to obtain the corresponding first rectangular window, and obtains the gray value of each pixel point in the first rectangular window.
Step S402, the terminal obtains a first gray scale comparison code corresponding to the current pixel to be matched according to the gray scale value of each pixel in the first rectangular window.
The terminal then computes the first gray scale comparison code for the current pixel point to be matched from the gray values of the pixel points in the first rectangular window. For example, a census transform may be used: the gray value of each pixel point in the first rectangular window is compared with the gray value of the current pixel point to be matched, and the first gray scale comparison code is determined from these magnitude relations.
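The census-style comparison described above can be sketched as follows; the window size and sample gray values are illustrative, and the row-major bit ordering is one possible convention rather than one mandated here:

```python
import numpy as np

def census_code(window):
    """Census transform of a square grayscale window: compare each
    neighbour with the centre pixel and pack the 0/1 results into a
    bit string (the centre pixel itself is skipped)."""
    h, w = window.shape
    center = window[h // 2, w // 2]
    bits = []
    for i in range(h):
        for j in range(w):
            if (i, j) == (h // 2, w // 2):
                continue
            bits.append("1" if window[i, j] > center else "0")
    return "".join(bits)

win = np.array([[10, 20, 30],
                [40, 25, 10],
                [60, 25, 90]])
print(census_code(win))  # 8-bit string around the centre value 25
```

The resulting bit string serves as the gray scale comparison code of the window's centre pixel.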
Step S403, the terminal obtains a second rectangular window corresponding to the first pixel point and gray values of all pixel points in the second rectangular window;
step S404, obtaining a second gray scale comparison code corresponding to the first pixel point according to the gray scale value of each pixel point in the second rectangular window.
Meanwhile, in a manner similar to steps S401 and S402, the terminal obtains a rectangular window corresponding to the first pixel point as a second rectangular window, obtains the gray value of each pixel point in the second rectangular window, and thereby obtains the gray scale comparison code corresponding to the first pixel point, i.e., the second gray scale comparison code.
In step S405, the terminal obtains the hamming distance between the first gray scale comparison code and the second gray scale comparison code, and determines the matching cost of the current pixel point to be matched and the first pixel point according to the hamming distance.
Finally, the terminal computes the hamming distance between the first gray scale comparison code and the second gray scale comparison code, and uses the obtained hamming distance for matching cost aggregation to obtain the matching cost of the current pixel point to be matched and the first pixel point. By repeating steps S401 to S405, the matching cost of every pixel point to be matched with the first pixel point is obtained.
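The hamming distance between two gray scale comparison codes, as used above, can be computed as follows; the bit-string representation is an assumption carried over from the census sketch:

```python
def hamming(code_a, code_b):
    """Hamming distance between two equal-length census bit strings:
    the number of positions at which the two codes differ."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b))

# Two codes differing at two bit positions give a cost of 2.
cost = hamming("00110101", "00100111")
print(cost)
```

A smaller hamming distance indicates a better candidate match for the first pixel point.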
Further, before the terminal obtains the gray value of each pixel point in the first rectangular window in step S401, the method further includes: the terminal performs region segmentation on the second image by using an image segmentation algorithm and determines the image region of each pixel point in the second image. Step S401 may then further include: the terminal selects, from the first rectangular window, target pixel points located in the same image region as the current pixel point to be matched, and obtains the gray value of each target pixel point in the first rectangular window. Step S402 may further include: the terminal obtains the first gray scale comparison code according to the gray values of the target pixel points.
The image segmentation algorithm may be the MeanShift segmentation algorithm, which segments the second image into a plurality of different image regions; the terminal then determines the segmented image region in which each pixel point of the second image is located. When determining the gray values within the first rectangular window, the terminal selects as target pixel points those pixel points located in the same image region as the current pixel point to be matched, and obtains the first gray scale comparison code from the gray values of the target pixel points only. In this way, interference from pixels or noise points that may exist in the first rectangular window is avoided, improving the accuracy of the obtained first gray scale comparison code.
Similarly, the pixel points in the second rectangular window may be similarly processed according to the above manner, and the second gray scale comparison code may be obtained based on the gray scale value of the pixel point that remains after the processing.
Further, as shown in fig. 5, the terminal may further obtain the first gray scale comparison code according to the gray scale value of each target pixel point, and the method may further include:
step S501, a terminal acquires a gray scale interval in which each target pixel point is located, a gray scale average value corresponding to the gray scale interval and the number of the pixel points;
in step S502, the terminal obtains the total number of target pixel points.
The gray scale intervals are intervals divided in advance by a user according to the range of gray values. For example, for target pixel points with gray values from 122 to 178, five gray scale intervals [122, 133], [133, 144], [144, 155], [155, 166], and [166, 178] may be used. The terminal may also count the gray scale interval in which each target pixel point in the first rectangular window is located, the number of target pixel points included in each gray scale interval, and the total number of target pixel points included in the first rectangular window.
Step S503, the terminal obtains gray comparison values corresponding to all target pixel points according to the gray average value, the number of pixel points and the total number of pixel points;
Step S504, the terminal determines a first gray scale comparison code according to the magnitude relation between the gray scale comparison value and the gray scale value of the pixel point to be matched currently.
Then, the terminal can determine the gray comparison value corresponding to each target pixel according to the gray average value of the gray interval where each target pixel is located, the number of pixels and the total number of pixels of all the target pixels, and then determine the final first gray comparison code based on the magnitude relation between the gray comparison value and the gray value of the current pixel to be matched.
For example, if the gray scale interval in which a certain target pixel point is located is interval A, the gray average value corresponding to interval A is average value A, and the number of pixel points corresponding to interval A is number A, then the gray comparison value corresponding to the target pixel point can be calculated from average value A, number A, and the total number of target pixel points by the formula V(P) = (Σ_{P' ∈ Ω(P)} ω(P') · f(P')) / sum(Ω(P)), where V(P) is the gray comparison value of the target pixel point, Ω(P) is the set of gray scale intervals, P' is the gray scale interval in which a target pixel point is located (i.e., interval A), ω(P') is the number of pixel points in that gray scale interval (i.e., number A), f(P') is the gray average value of that gray scale interval (i.e., average value A), and sum(Ω(P)) is the total number of target pixel points.
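One possible reading of the above calculation is the following histogram-weighted computation; the number of intervals and the sample gray values are chosen purely for illustration:

```python
import numpy as np

def gray_comparison_value(grays, n_bins=5):
    """Histogram-weighted gray comparison value V(P): bin the target
    pixels' gray values into n_bins intervals, then compute
    V(P) = sum over intervals of (pixel count * interval mean gray)
    divided by the total number of target pixels."""
    grays = np.asarray(grays, dtype=float)
    counts, edges = np.histogram(grays, bins=n_bins)
    total = grays.size                       # sum(Omega(P))
    v = 0.0
    for k in range(n_bins):
        if counts[k] == 0:
            continue
        lo, hi = edges[k], edges[k + 1]
        # mean gray f(P') of the pixels that fall in interval k
        # (last bin is right-inclusive, matching np.histogram)
        in_bin = grays[(grays >= lo) & (grays < hi)] if k < n_bins - 1 \
            else grays[(grays >= lo) & (grays <= hi)]
        v += counts[k] * in_bin.mean()       # omega(P') * f(P')
    return v / total

vals = [122, 130, 150, 150, 178]
print(round(gray_comparison_value(vals), 2))
```

The result is a weighted mean gray value: intervals holding more target pixels contribute more to V(P).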
Further, step S405 may further include: the terminal takes the current pixel point to be matched as a center, and a matching cost determining area corresponding to the current pixel point to be matched is obtained; acquiring an image area in which each pixel point in the matching cost determination area is positioned, and determining a matching cost weight of each pixel point in the matching cost determination area according to the image area; and determining the matching cost by using the matching cost weight and the hamming distance.
The matching cost determination area is a rectangular window area of width m and height n constructed with the current pixel point to be matched as the center. The terminal determines the segmented image region in which each pixel point of the matching cost determination area is located, and from this determines the matching cost weight of each pixel point. For example, the terminal may judge whether each pixel point in the matching cost determination area lies in the same image region as the current pixel point to be matched: if so, the matching cost weight is set to 1; if not, it is set to 0.1. Finally, the matching cost of the current pixel point to be matched and the first pixel point is calculated using the matching cost weights and the obtained hamming distance between the current pixel point to be matched and the first pixel point.
In the above embodiment, selecting rectangular windows to compute the first and second gray scale comparison codes, and from them the matching cost of the current pixel point to be matched and the first pixel point, improves the precision of the obtained matching cost. Meanwhile, the first gray scale comparison code is computed from the gray values of the target pixel points that lie in the same segmented image region as the current pixel point to be matched, and the embodiment further uses the gray average value and pixel count of the gray scale interval in which each target pixel point is located, together with the total number of target pixel points, to obtain the gray comparison value of each target pixel point and, from it, the first gray scale comparison code. In this way, pixel values of different regions and depths do not affect each other, improving the matching effect at depth discontinuities and occluded positions. In addition, setting the matching cost weight of each pixel point in the matching cost determination area reduces interference from pixels at occluded positions or depth discontinuities, improving matching precision.
In one application example, a region search algorithm based on target detection is further provided: the search path for matching points is changed from a single line to a local region, alleviating the problem that the two images cannot be aligned accurately, and the search region is refined by combining the crisscross method with gradient information. An optimized same-region census algorithm is provided by means of segmentation and gray distribution information, so that pixel values of different regions and depths do not affect each other, improving the matching effect at depth discontinuities and occluded positions; finally, errors in the disparity map are corrected by means of the left-right consistency principle. The specific method is as follows:
(1) Obtain the object position information from target detection, and obtain the predicted coordinates of the corresponding point from the relative position relationship between the pixel point to be measured and the detection rectangle, using the following formulas.
x_q = x_qr + dx
y_q = y_qr + dy
dx = x_p - x_pr
dy = y_p - y_pr
where (x_q, y_q) are the predicted coordinates of the point to be measured in the target image, (x_qr, y_qr) are the top-left vertex coordinates of the detection region containing that point in the target image, and dx and dy are the relative distances between the top-left vertex (x_pr, y_pr) of the detection region in the reference image and the point to be measured (x_p, y_p). A region search is then performed with (x_q, y_q) as the center point to find the match for the point to be measured (x_p, y_p). As shown in fig. 6, the search area is represented for simplicity by a rectangular box of width j and height i, but in practice the area is generally irregularly shaped.
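The predicted-coordinate formulas above can be sketched as follows; the coordinate tuples and sample values are illustrative:

```python
def predict_coordinates(p, p_rect, q_rect):
    """Predict where reference-image point (x_p, y_p) lands in the
    target image, using the detected object's rectangle in both images.
    p_rect and q_rect are the top-left vertices (x_pr, y_pr) and
    (x_qr, y_qr) of the detection box in the reference and target
    images respectively."""
    dx = p[0] - p_rect[0]                     # dx = x_p - x_pr
    dy = p[1] - p_rect[1]                     # dy = y_p - y_pr
    return (q_rect[0] + dx, q_rect[1] + dy)   # (x_q, y_q)

# A point 40 px right and 30 px below its box corner in the reference
# image is predicted at the same offset from the target-image box corner.
xq, yq = predict_coordinates((240, 180), (200, 150), (170, 152))
print(xq, yq)
```

The region search is then centered on the returned (x_q, y_q).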
(2) Refine the search area by combining the crisscross method with gradient information. First, a distance threshold L for the region is established, and the maximum extent of the search region is obtained with the predicted coordinates as the center. Since the illumination intensities of the left and right images generally differ in an outdoor environment, the pixels are screened using their gradient information: the gradient amplitude and phase angle of the pixel to be measured are compared with those of each pixel in the search area to judge whether the pixel is added to the set of pixels to be matched.
D_s(p_i, p) < L
D_φ(p_i, p) < θ
D_m(p_i, p) < γ
where D_φ(p_i, p) and D_m(p_i, p) are the phase angle difference and amplitude difference of the two pixels respectively, θ and γ are the thresholds for phase angle and amplitude, p_i is a pixel point in the search area, and p is the pixel point to be measured. G_x and G_y are the components of the pixel gradient vector in the x and y directions, and m and φ are the magnitude and phase angle of the gradient, m = sqrt(G_x² + G_y²) and φ = arctan(G_y / G_x). After traversing all pixels in the search area, the set S(P) of pixels to be matched is obtained.
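The gradient components G_x and G_y, and from them the magnitude m and phase angle φ used in the screening above, can be computed as in the following sketch; using numpy's finite-difference gradient is an assumption, since the text does not fix a particular gradient operator:

```python
import numpy as np

def gradient_features(img):
    """Per-pixel gradient magnitude m and phase angle phi from the
    x/y components G_x, G_y of the image gradient."""
    # np.gradient returns derivatives along axis 0 (rows) then axis 1
    # (columns), i.e. G_y first and G_x second.
    gy, gx = np.gradient(img.astype(float))
    m = np.hypot(gx, gy)        # m = sqrt(G_x^2 + G_y^2)
    phi = np.arctan2(gy, gx)    # phase angle of the gradient
    return m, phi

# Horizontal ramp: gradient points along +x with magnitude 10.
img = np.tile(np.arange(5.0), (5, 1)) * 10
m, phi = gradient_features(img)
print(m[2, 2], phi[2, 2])   # interior pixel
```

Candidate pixels whose m and φ differ from those of the pixel to be measured by less than γ and θ respectively are kept in S(P).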
(3) First, MeanShift segmentation is used to divide the image into different regions; pixels in the window that belong to a different region from the center point are eliminated, a gray histogram of the remaining pixels is constructed, weight values are obtained from the histogram, and the gray comparison value is calculated in a weighted manner. During cost aggregation, the matching cost weights are calculated according to the segmentation information to improve the matching effect at depth discontinuities and occluded positions. The MeanShift algorithm controls the chromaticity-domain bandwidth h_r, the spatial-domain bandwidth h_s, and a minimum region limit M to achieve the final segmentation effect, as shown in fig. 7.
When calculating the gray comparison value for a window, the pixel points in the window that are not in the same segmented region as the center point are removed according to the segmentation result, so that pixels or noise points at occluded positions cannot interfere with the comparison value. A gray histogram is then constructed from the remaining pixels to obtain the weight of each gray value and, as shown in fig. 8, the gray comparison value is obtained from the weights using the following formula:
V(P) = (Σ_{P' ∈ Ω(P)} ω(P') · f(P')) / sum(Ω(P))
where V(P) is the calculated gray comparison value, Ω(P) is the set of gray scale intervals, P' is a certain interval, ω(P') is the number of pixels in interval P', sum(Ω(P)) is the number of pixels across all gray scale intervals, and f(P') is the average gray value of the pixels in interval P'.
After the comparison value is obtained by the above calculation and the Census transform is completed using it, the matching cost of two pixel points is expressed as the Hamming distance between their corresponding bit strings:
C(x, y, d) = Hamming(CT_L(x, y), CT_R(x - d, y))
where C(x, y, d) is the matching cost of pixel (x, y) at disparity d, CT_L(x, y) is the bit string of the left-image pixel (x, y), and CT_R(x - d, y) is the bit string of the right-image pixel (x - d, y).
After the matching cost calculation is finished, the segmentation information is converted into corresponding weight coefficients, which are added into the cost aggregation formula:
C_A(x, y, d) = Σ_{(i,j) ∈ W} ω_s(i, j) · C(i, j, d)
where W is a rectangular window of width m and height n constructed with (x, y) as the center; ω_s = 1 when (i, j) and (x, y) lie in the same segmented region, and ω_s = 0.1 when they do not. When the window lies in an edge area, these weights reduce the interference of pixels at occluded positions or depth discontinuities and improve the matching precision. Finally, an initial disparity value is obtained through the WTA (winner-takes-all) strategy.
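The segmentation-weighted cost aggregation and WTA selection described above can be sketched as follows; the array shapes and sample costs are illustrative, while the weights 1 and 0.1 follow the text:

```python
import numpy as np

def aggregate_and_wta(costs, same_region):
    """Segmentation-weighted cost aggregation over an m x n window,
    followed by winner-takes-all disparity selection.
    costs: array (D, m, n) of per-pixel matching costs for each of D
    candidate disparities; same_region: boolean (m, n) mask, True where
    a window pixel lies in the same segment as the centre pixel.
    Weight omega_s is 1 for same-segment pixels, 0.1 otherwise."""
    w = np.where(same_region, 1.0, 0.1)
    aggregated = (costs * w).sum(axis=(1, 2))
    return int(np.argmin(aggregated))   # WTA: disparity with minimal cost

# Two candidate disparities over a 2x2 window; the right column lies in
# a different segment and is therefore down-weighted.
costs = np.array([[[4., 4.], [4., 4.]],
                  [[1., 9.], [1., 9.]]])
mask = np.array([[True, False], [True, False]])
print(aggregate_and_wta(costs, mask))
```

Without the weights, the large costs in the foreign segment would mask the clearly better same-segment costs of the second disparity.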
(4) Abnormal disparity values are screened out using left-right consistency detection:
|D_L(p) - D_R(p - D_L(p))| > τ
where D_L(p) is the disparity value of point p in the left image, D_R(p - D_L(p)) is the disparity value of the right-image pixel point corresponding to p, and τ is the set threshold. If the difference between the two disparity values is larger than the set threshold, the point is considered an outlier, and whether it is occluded or mismatched is then judged by whether the epipolar lines intersect. For an occlusion outlier, the disparity values of the first valid points to the left and to the right of it in the horizontal direction are found, and the smaller of the two is selected as its disparity value. For a mismatch outlier, the disparity value of the non-outlier point with the closest gray value in its neighborhood is selected as its disparity value. Finally, median filtering is used to eliminate horizontal-stripe artifacts and the like.
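The left-right consistency check above can be sketched as follows; the disparity maps and the threshold τ = 1 are illustrative, and only the outlier flagging (not the subsequent occlusion/mismatch filling) is shown:

```python
import numpy as np

def lr_consistency_outliers(disp_left, disp_right, tau=1):
    """Flag pixels whose left disparity disagrees with the right
    disparity at the corresponding position:
    |D_L(p) - D_R(p - D_L(p))| > tau.
    Pixels whose correspondence falls outside the image are also
    flagged as outliers."""
    h, w = disp_left.shape
    outliers = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(disp_left[y, x])
            xr = x - d                       # corresponding right-image column
            if xr < 0 or abs(d - disp_right[y, xr]) > tau:
                outliers[y, x] = True
    return outliers

dl = np.array([[0, 0, 1, 3]])   # left disparity map (one row)
dr = np.array([[0, 0, 1, 1]])   # right disparity map
print(lr_consistency_outliers(dl, dr))
```

Here only the last pixel is flagged: its left disparity of 3 maps to right-image column 0, where the right disparity of 0 differs by more than τ.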
In this application example, the region-search-based method replaces single-line scanning in pixel matching, reducing the influence of binocular images that cannot be aligned accurately. Refining the search area by combining the crisscross method with gradient information reduces the possibility that the corresponding point is not contained in the search area. A same-region census matching method based on segmentation and gray distribution information, together with a weight-optimized cost aggregation algorithm, improves matching at depth discontinuities and occluded positions. Finally, the obtained disparity map is refined by means of the left-right consistency principle, further reducing errors.
It should be understood that, although the steps in the flowcharts of figs. 1-5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 1-5 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of their execution is not necessarily sequential; they may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a distance measuring apparatus including: the device comprises a first position determining module 901, a predicted position obtaining module 902, a point to be matched obtaining module 903, a matching cost obtaining module 904 and a distance measuring module 905 to be tested, wherein:
the first position determining module 901 is configured to obtain a first image and a second image, which are captured by a binocular camera and carry a target point to be detected, and determine first position information of a first pixel point corresponding to the target point to be detected in the first image;
A predicted position obtaining module 902, configured to determine predicted position information corresponding to the first position information in the second image, and obtain a predicted pixel point corresponding to the predicted position information;
the to-be-matched point obtaining module 903 is configured to obtain at least one to-be-matched pixel point corresponding to the predicted position information in the second image; the distance between the pixel point to be matched and the predicted pixel point is smaller than a preset distance threshold value;
the matching cost obtaining module 904 is configured to obtain the matching cost of each pixel point to be matched and the first pixel point, take the pixel point to be matched with the smallest matching cost as the second pixel point corresponding to the target point to be detected in the second image, and determine second position information of the second pixel point;
the distance to be measured measuring module 905 is configured to determine a parallax value corresponding to the target point to be measured according to the first position information and the second position information, and obtain a distance between the binocular camera and the target point to be measured by using the parallax value.
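As background for the last module, the distance can be recovered from the disparity value by standard binocular triangulation Z = f·B/d (pinhole model with rectified cameras). This relation is the conventional one and is assumed here, since the module description does not spell out the formula; the focal length, baseline, and disparity values below are illustrative:

```python
def disparity_to_distance(disparity_px, focal_px, baseline_m):
    """Standard binocular triangulation: distance Z = f * B / d, with
    focal length f in pixels, baseline B in metres, and disparity d
    in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 40 px of disparity with an 800 px focal length and 12 cm baseline.
print(disparity_to_distance(disparity_px=40, focal_px=800, baseline_m=0.12))
```

Larger disparities correspond to closer targets, which is why accurate sub-pixel matching matters most at long range.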
In one embodiment, the predicted position obtaining module 902 is further configured to obtain first reference position information of a first reference pixel in the first image, and obtain second reference position information of a second reference pixel corresponding to the first reference pixel in the second image; acquiring the relative distance between the first pixel point and the first reference pixel point according to the first reference position information and the first position information; and acquiring predicted position information according to the relative distance and the second reference position information.
In one embodiment, the to-be-matched point obtaining module 903 is further configured to obtain a matching area with the predicted position information as a center and the distance threshold as a radius in the second image, and take a pixel point in the matching area as an initial matching pixel point; determining phase angle differences and amplitude differences of each initial matching pixel point and the predicted pixel point; and taking the initial matched pixel point with the phase angle difference smaller than the preset phase angle difference threshold and the amplitude difference smaller than the preset amplitude difference threshold as the pixel point to be matched.
In one embodiment, the matching cost obtaining module 904 is further configured to determine a current pixel to be matched from the pixels to be matched, obtain a first rectangular window corresponding to the current pixel to be matched, and obtain a gray value of each pixel in the first rectangular window; acquiring a first gray scale comparison code corresponding to a current pixel point to be matched according to the gray scale value of each pixel point in the first rectangular window; acquiring a second rectangular window corresponding to the first pixel point and gray values of all pixel points in the second rectangular window; acquiring a second gray scale comparison code corresponding to the first pixel point according to the gray scale value of each pixel point in the second rectangular window; and acquiring the Hamming distance between the first gray scale comparison code and the second gray scale comparison code, and determining the matching cost of the current pixel point to be matched and the first pixel point according to the Hamming distance.
In one embodiment, the matching cost obtaining module 904 is further configured to perform region segmentation on the second image by using an image segmentation algorithm, and determine an image region of each pixel point in the second image; selecting target pixel points which are the same as the image area of the pixel points to be matched currently from the first rectangular window, and acquiring gray values of all the target pixel points in the first rectangular window; and acquiring a first gray comparison code according to the gray value of each target pixel point.
In one embodiment, the matching cost obtaining module 904 is further configured to obtain a gray scale interval in which each target pixel is located, and a gray average value and the number of pixels corresponding to the gray scale interval; obtaining the total number of pixel points of the target pixel points; acquiring a gray comparison value corresponding to each target pixel point according to the gray average value, the number of the pixel points and the total number of the pixel points; and determining a first gray comparison code according to the magnitude relation between the gray comparison value and the gray value of the current pixel point to be matched.
In one embodiment, the matching cost obtaining module 904 is further configured to obtain a matching cost determining area corresponding to the current pixel to be matched, with the current pixel to be matched as a center; acquiring an image area in which each pixel point in the matching cost determination area is positioned, and determining a matching cost weight of each pixel point in the matching cost determination area according to the image area; and determining the matching cost by using the matching cost weight and the hamming distance.
For specific limitations of the distance measuring device, reference may be made to the above limitations of the distance measuring method, and no further description is given here. The respective modules in the distance measuring device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and an internal structure diagram thereof may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a distance measurement method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the above methods may be implemented by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments merely represent several implementations of the present application, and while their description is relatively specific and detailed, they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art could make various modifications and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (10)

1. A method of distance measurement, the method comprising:
acquiring a first image and a second image which are shot by a binocular camera and carry a target point to be detected, and determining first position information of a first pixel point corresponding to the target point to be detected in the first image;
determining predicted position information corresponding to the first position information in the second image, and acquiring predicted pixel points corresponding to the predicted position information;
acquiring at least one pixel point to be matched corresponding to the predicted position information in the second image, wherein the distance between each pixel point to be matched and the predicted pixel point is smaller than a preset distance threshold; this comprises: acquiring, in the second image, a matching region centered on the predicted position information with the distance threshold as its radius, and taking the pixel points in the matching region as initial matching pixel points; determining the phase angle difference and amplitude difference between each initial matching pixel point and the predicted pixel point; and taking each initial matching pixel point whose phase angle difference is smaller than a preset phase angle difference threshold and whose amplitude difference is smaller than a preset amplitude difference threshold as a pixel point to be matched;
acquiring the matching cost between each pixel point to be matched and the first pixel point, taking the pixel point to be matched with the minimum matching cost as a second pixel point corresponding to the target point to be detected in the second image, and determining second position information of the second pixel point; this comprises: determining a current pixel point to be matched from the pixel points to be matched, acquiring a first rectangular window corresponding to the current pixel point to be matched, and acquiring gray values of the pixel points in the first rectangular window; acquiring a first gray scale comparison code corresponding to the current pixel point to be matched according to the gray values of the pixel points in the first rectangular window; acquiring a second rectangular window corresponding to the first pixel point and gray values of the pixel points in the second rectangular window; acquiring a second gray scale comparison code corresponding to the first pixel point according to the gray values of the pixel points in the second rectangular window; and acquiring the Hamming distance between the first gray scale comparison code and the second gray scale comparison code, and determining the matching cost between the current pixel point to be matched and the first pixel point according to the Hamming distance;
determining a parallax value corresponding to the target point to be detected according to the first position information and the second position information, and acquiring the distance between the binocular camera and the target point to be detected by using the parallax value;
before the gray value of each pixel point in the first rectangular window is obtained, the method further comprises:
performing region segmentation on the second image by using an image segmentation algorithm, and determining an image region of each pixel point in the second image;
the obtaining the gray value of each pixel point in the first rectangular window includes:
selecting, from the first rectangular window, target pixel points that are located in the same image region as the current pixel point to be matched, and acquiring gray values of the target pixel points in the first rectangular window;
the obtaining a first gray scale comparison code corresponding to the current pixel to be matched according to the gray scale value of each pixel in the first rectangular window includes:
acquiring the gray scale interval in which each target pixel point is located, and the gray average value and the number of pixel points corresponding to that gray scale interval; acquiring the total number of target pixel points; acquiring a gray comparison value corresponding to each target pixel point according to the gray average value, the number of pixel points, and the total number of pixel points; and determining the first gray scale comparison code according to the magnitude relation between the gray comparison values and the gray value of the current pixel point to be matched.
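The core of claim 1 — a window-based gray comparison code, a Hamming-distance matching cost, and triangulation from the parallax value — can be sketched as follows. This is an illustrative simplification, not the claimed implementation: the reference value here is the plain window mean, whereas the claim derives per-pixel gray comparison values from gray-interval statistics, and all names, window sizes, and camera parameters are assumptions.

```python
import numpy as np

def comparison_code(img, y, x, h=3, w=3):
    """Build a bit string comparing each pixel in a (2h+1)x(2w+1) window
    against a reference value (here the window mean, a stand-in for the
    claim's gray comparison value)."""
    win = img[y - h:y + h + 1, x - w:x + w + 1].astype(np.float64)
    ref = win.mean()                       # simplified gray comparison value
    return (win.ravel() > ref).astype(np.uint8)

def hamming_cost(code_a, code_b):
    """Matching cost = Hamming distance between two comparison codes."""
    return int(np.count_nonzero(code_a != code_b))

def disparity_to_distance(x_left, x_right, focal_px, baseline_m):
    """Standard binocular triangulation: Z = f * B / d."""
    d = abs(x_left - x_right)              # parallax (disparity) in pixels
    return focal_px * baseline_m / d

left = np.random.default_rng(0).integers(0, 256, (32, 32))
right = np.roll(left, -4, axis=1)          # simulate a 4-pixel disparity
c1 = comparison_code(left, 16, 20)
c2 = comparison_code(right, 16, 16)
print(hamming_cost(c1, c2))                # 0: the windows are identical
print(disparity_to_distance(20, 16, focal_px=800, baseline_m=0.12))  # 24.0
```

The pixel to be matched that minimizes this cost is taken as the second pixel point, and its horizontal offset from the first pixel point is the parallax value fed into the triangulation step.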
2. The method of claim 1, wherein the determining predicted position information corresponding to the first position information in the second image comprises:
acquiring first reference position information of a first reference pixel point in the first image, and acquiring second reference position information of a second reference pixel point corresponding to the first reference pixel point in the second image;
acquiring the relative distance between the first pixel point and the first reference pixel point according to the first reference position information and the first position information;
and acquiring the predicted position information according to the relative distance and the second reference position information.
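The predicted-position step of claim 2 amounts to transferring the target's offset from a known reference correspondence between the two images. A minimal sketch, with the (x, y) tuple layout and function name assumed for illustration:

```python
def predict_position(p1, ref1, ref2):
    """Claim 2 in two lines: measure the target's offset from a reference
    pixel in the first image (the relative distance), then apply that
    offset to the reference's known counterpart in the second image."""
    dx, dy = p1[0] - ref1[0], p1[1] - ref1[1]   # relative distance
    return (ref2[0] + dx, ref2[1] + dy)          # predicted position

# Target at (120, 85) in image 1; reference pair (100, 80) <-> (60, 80).
print(predict_position((120, 85), (100, 80), (60, 80)))  # (80, 85)
```

Restricting the subsequent search to a small radius around this prediction is what keeps the candidate set in claim 1 small.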
3. The method of claim 1, wherein the determining the matching cost between the current pixel point to be matched and the first pixel point according to the Hamming distance comprises:
acquiring a matching cost determination region corresponding to the current pixel point to be matched, with the current pixel point to be matched as its center;
acquiring the image region in which each pixel point in the matching cost determination region is located, and determining a matching cost weight for each pixel point in the matching cost determination region according to the image region;
and determining the matching cost by using the matching cost weights and the Hamming distance.
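Claim 3's segmentation-aware cost aggregation can be illustrated as follows. The specific weights, window radius, and function names are invented for the sketch, since the claim states only that the weights are determined from the image regions:

```python
import numpy as np

def weighted_cost(hamming, labels, y, x, r=2, w_same=1.0, w_other=0.25):
    """Aggregate per-pixel Hamming costs over a (2r+1)x(2r+1) matching
    cost determination region, down-weighting pixels whose segmentation
    label differs from the center's (weights are illustrative)."""
    center = labels[y, x]
    total = 0.0
    for yy in range(y - r, y + r + 1):
        for xx in range(x - r, x + r + 1):
            w = w_same if labels[yy, xx] == center else w_other
            total += w * hamming[yy, xx]
    return total

labels = np.zeros((5, 5), dtype=int)
labels[:, :2] = 1                      # a second segment on the left
hamming = np.ones((5, 5))              # uniform per-pixel cost of 1
print(weighted_cost(hamming, labels, 2, 2))   # 15*1.0 + 10*0.25 = 17.5
```

Weighting by segment membership keeps pixels from a different surface (e.g. across a depth discontinuity) from dominating the aggregated cost.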
4. The method of claim 1, wherein the image segmentation algorithm is a meanshift segmentation algorithm.
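Claim 4 fixes the segmentation algorithm as mean shift. A toy one-dimensional mean shift over gray values shows the mode-seeking idea behind it; practical implementations (e.g. OpenCV's pyrMeanShiftFiltering) shift jointly in the spatial and color domains, so this is only a conceptual sketch:

```python
def mean_shift_modes(values, bandwidth=10.0, iters=50):
    """Toy 1-D mean shift: each gray value repeatedly moves to the mean
    of the original values within `bandwidth` of it, converging on a
    density mode; pixels ending at the same mode form one region."""
    pts = [float(v) for v in values]
    for _ in range(iters):
        pts = [sum(v for v in values if abs(v - p) <= bandwidth) /
               max(1, sum(1 for v in values if abs(v - p) <= bandwidth))
               for p in pts]
    return [round(p) for p in pts]

gray = [10, 12, 11, 200, 205, 198]
print(mean_shift_modes(gray))   # two modes: the dark and bright clusters
```

In the method above, the resulting region labels drive both the target-pixel selection inside the rectangular window and the matching cost weights of claim 3.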
5. A distance measuring device, the device comprising:
a first position determining module, configured to acquire a first image and a second image which are captured by the binocular camera and contain a target point to be detected, and to determine first position information of a first pixel point corresponding to the target point to be detected in the first image;
a predicted position acquisition module, configured to determine predicted position information corresponding to the first position information in the second image, and acquire a predicted pixel point corresponding to the predicted position information;
a to-be-matched point acquisition module, configured to acquire at least one pixel point to be matched corresponding to the predicted position information in the second image, wherein the distance between each pixel point to be matched and the predicted pixel point is smaller than a preset distance threshold; the module is further configured to acquire, in the second image, a matching region centered on the predicted position information with the distance threshold as its radius, and take the pixel points in the matching region as initial matching pixel points; determine the phase angle difference and amplitude difference between each initial matching pixel point and the predicted pixel point; and take each initial matching pixel point whose phase angle difference is smaller than a preset phase angle difference threshold and whose amplitude difference is smaller than a preset amplitude difference threshold as a pixel point to be matched;
a matching cost acquisition module, configured to acquire the matching cost between each pixel point to be matched and the first pixel point, take the pixel point to be matched with the minimum matching cost as a second pixel point corresponding to the target point to be detected in the second image, and determine second position information of the second pixel point; the module is further configured to determine a current pixel point to be matched from the pixel points to be matched, acquire a first rectangular window corresponding to the current pixel point to be matched, and acquire gray values of the pixel points in the first rectangular window; acquire a first gray scale comparison code corresponding to the current pixel point to be matched according to the gray values of the pixel points in the first rectangular window; acquire a second rectangular window corresponding to the first pixel point and gray values of the pixel points in the second rectangular window; acquire a second gray scale comparison code corresponding to the first pixel point according to the gray values of the pixel points in the second rectangular window; and acquire the Hamming distance between the first gray scale comparison code and the second gray scale comparison code, and determine the matching cost between the current pixel point to be matched and the first pixel point according to the Hamming distance;
a distance measurement module, configured to determine a parallax value corresponding to the target point to be detected according to the first position information and the second position information, and to acquire the distance between the binocular camera and the target point to be detected by using the parallax value;
wherein the matching cost acquisition module is further configured to perform region segmentation on the second image by using an image segmentation algorithm and determine the image region of each pixel point in the second image; select, from the first rectangular window, target pixel points that are located in the same image region as the current pixel point to be matched, and acquire gray values of the target pixel points in the first rectangular window; acquire the gray scale interval in which each target pixel point is located, and the gray average value and the number of pixel points corresponding to that gray scale interval; acquire the total number of target pixel points; acquire a gray comparison value corresponding to each target pixel point according to the gray average value, the number of pixel points, and the total number of pixel points; and determine the first gray scale comparison code according to the magnitude relation between the gray comparison values and the gray value of the current pixel point to be matched.
6. The apparatus of claim 5, wherein the predicted position acquisition module is further configured to acquire first reference position information of a first reference pixel in the first image, and acquire second reference position information of a second reference pixel in the second image that corresponds to the first reference pixel; acquiring the relative distance between the first pixel point and the first reference pixel point according to the first reference position information and the first position information; and acquiring the predicted position information according to the relative distance and the second reference position information.
7. The apparatus of claim 5, wherein the matching cost acquisition module is further configured to acquire a matching cost determination region corresponding to the current pixel point to be matched, with the current pixel point to be matched as its center; acquire the image region in which each pixel point in the matching cost determination region is located, and determine a matching cost weight for each pixel point in the matching cost determination region according to the image region; and determine the matching cost by using the matching cost weights and the Hamming distance.
8. The apparatus of claim 5, wherein the image segmentation algorithm is a meanshift segmentation algorithm.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 4.
10. A computer readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 4.
CN202110440724.8A 2021-04-23 2021-04-23 Distance measuring method, distance measuring device, computer device, and storage medium Active CN113298869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110440724.8A CN113298869B (en) 2021-04-23 2021-04-23 Distance measuring method, distance measuring device, computer device, and storage medium


Publications (2)

Publication Number Publication Date
CN113298869A CN113298869A (en) 2021-08-24
CN113298869B true CN113298869B (en) 2023-08-04

Family

ID=77320119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110440724.8A Active CN113298869B (en) 2021-04-23 2021-04-23 Distance measuring method, distance measuring device, computer device, and storage medium

Country Status (1)

Country Link
CN (1) CN113298869B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951310A (en) * 2020-07-17 2020-11-17 深圳市帝普森微电子有限公司 Binocular stereo matching method, disparity map acquisition device and computer storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN105447885B (en) * 2014-07-11 2018-09-25 株式会社理光 The method and apparatus for calculating parallax
JP6550881B2 (en) * 2014-07-14 2019-07-31 株式会社リコー Three-dimensional object detection device, three-dimensional object detection method, three-dimensional object detection program, and mobile device control system
CN108596116B (en) * 2018-04-27 2021-11-05 深圳市商汤科技有限公司 Distance measuring method, intelligent control method and device, electronic equipment and storage medium
CN111105453B (en) * 2018-10-25 2023-05-02 上海理工大学 Method for obtaining disparity map
CN110096993A (en) * 2019-04-28 2019-08-06 深兰科技(上海)有限公司 The object detection apparatus and method of binocular stereo vision
CN111325778B (en) * 2020-01-22 2022-04-08 天津大学 Improved Census stereo matching algorithm based on window cross-correlation information
CN112395961B (en) * 2020-10-31 2022-08-09 太原理工大学 Vision active pedestrian avoidance and water pressure self-adaptive control method for sprinkler


Non-Patent Citations (1)

Title
Fast image matching with a simplified SIFT-based algorithm; Pei Cong; Dai Liling; Lu Zhangping; Manufacturing Automation (Issue 01); 132-135 *

Also Published As

Publication number Publication date
CN113298869A (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN111402152B (en) Processing method and device of disparity map, computer equipment and storage medium
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
KR101032446B1 (en) Apparatus and method for detecting a vertex on the screen of a mobile terminal
CN109640066B (en) Method and device for generating high-precision dense depth image
CN107980138A (en) A kind of false-alarm obstacle detection method and device
CN111368717B (en) Line-of-sight determination method, line-of-sight determination device, electronic apparatus, and computer-readable storage medium
CN105069453A (en) Image correction method and apparatus
CN111105452B (en) Binocular vision-based high-low resolution fusion stereo matching method
JP4296617B2 (en) Image processing apparatus, image processing method, and recording medium
CN113298869B (en) Distance measuring method, distance measuring device, computer device, and storage medium
CN117291790A (en) SAR image registration method, SAR image registration device, SAR image registration equipment and SAR image registration medium
Stentoumis et al. Implementing an adaptive approach for dense stereo-matching
CN113724141B (en) Image correction method and device and electronic equipment
CN114415129A (en) Visual and millimeter wave radar combined calibration method and device based on polynomial model
JP2018032144A (en) Image processor, image processing method and program
US20240029288A1 (en) Image processing apparatus, image processing method, and storage medium
CN115908243B (en) Method, device, equipment and storage medium for dividing nondestructive testing image
CN111223139A (en) Target positioning method and terminal equipment
CN116485842A (en) Method, device and storage medium for automatic target recognition
CN116503387B (en) Image detection method, device, equipment, system and readable storage medium
US11282280B2 (en) Method and system for node vectorisation
CN111967342B (en) Method, device, electronic device and storage medium for setting plane parameters
CN110599504B (en) Image processing method and device
Shi et al. Practical Method of Low-Light-Level Binocular Ranging Based on Triangulation and Error Correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230412

Address after: Full Floor 14, Unit 3, Building 2, No. 11, Middle Spectra Road, Huangpu District, Guangzhou, Guangdong 510700

Applicant after: China Southern Power Grid Digital Grid Technology (Guangdong) Co.,Ltd.

Address before: Room 86, room 406, No.1, Yichuang street, Zhongxin Guangzhou Knowledge City, Huangpu District, Guangzhou City, Guangdong Province

Applicant before: Southern Power Grid Digital Grid Research Institute Co.,Ltd.

GR01 Patent grant