CN108596012B - Barrier frame combining method, device and terminal - Google Patents


Info

Publication number
CN108596012B
CN108596012B (application number CN201810052738.0A)
Authority
CN
China
Prior art keywords
obstacle
frame
parallax
obstacle frame
abnormal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810052738.0A
Other languages
Chinese (zh)
Other versions
CN108596012A (en)
Inventor
冯谨强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Co Ltd filed Critical Hisense Co Ltd
Priority to CN201810052738.0A priority Critical patent/CN108596012B/en
Publication of CN108596012A publication Critical patent/CN108596012A/en
Application granted granted Critical
Publication of CN108596012B publication Critical patent/CN108596012B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an obstacle frame merging method, device and terminal, relating to the technical field of driver assistance. For two obstacle frames whose spatial distances satisfy preset thresholds, the number of abnormal parallax points in the middle area between them in the disparity map is counted. If the number of abnormal parallax points in the middle area is large, the two obstacle frames are determined not to belong to the same obstacle and are not merged; conversely, if the number of abnormal parallax points in the middle area is small, the two obstacle frames are determined to belong to the same obstacle and are merged to obtain the target obstacle frame, thereby improving the accuracy of obstacle frame merging.

Description

Barrier frame combining method, device and terminal
Technical Field
The invention relates to the technical field of driver assistance, and in particular to an obstacle frame merging method, device and terminal.
Background
When obstacle detection is performed on an edge-based disparity map, the effective disparities of an obstacle are mainly concentrated on the edge pixel points of objects with a large gradient, while pixel points with a small row-direction gradient usually carry invalid disparities. As a result, the obstacle has too few effective disparities in the disparity map, and its row-direction disparity is discontinuous. Because the same obstacle may therefore be detected as a plurality of obstacle frames, the preliminarily detected obstacle frames need to be merged, so that frames originally belonging to the same obstacle become one obstacle frame, yielding an accurate obstacle detection result.
When merging obstacle frames in the prior art, distance thresholds in the Z direction and the X direction are preset; if the difference of the Z-direction distances and the difference of the X-direction distances between two obstacle frames are both smaller than the thresholds in the respective directions, the two obstacle frames are merged.
Disclosure of Invention
The invention provides an obstacle frame merging method, device and terminal, aiming to solve the problem that existing obstacle frame merging is not accurate.
To this end, the invention provides the following technical solutions:
in a first aspect, the present invention provides a method for combining obstacle frames, the method comprising:
searching, in the disparity map, for a second obstacle frame that overlaps a first obstacle frame in the row direction and does not overlap it in the column direction;
if the absolute value of the difference value of the Z-direction distance between the first obstacle frame and the second obstacle frame and the absolute value of the difference value of the X-direction distance between the first obstacle frame and the second obstacle frame in a space coordinate system are smaller than preset distance threshold values in respective directions, determining a middle area between the first obstacle frame and the second obstacle frame in the disparity map, and counting the number of abnormal disparity points in the middle area;
and if the number of the abnormal parallax points is smaller than a parallax number threshold, merging the first obstacle frame and the second obstacle frame to obtain a target obstacle frame.
Optionally, before counting the number of abnormal parallax points in the intermediate region, the method further includes:
determining a parallax range of the middle area based on a parallax range corresponding to a first obstacle in the first obstacle frame and a parallax range corresponding to a second obstacle in the second obstacle frame;
and if the parallax value of the pixel point in the middle area does not fall into the parallax range of the middle area, determining the pixel point as an abnormal parallax point.
Optionally, the counting the number of abnormal parallax points in the middle region includes:
dividing the middle area into a preset number of sub-areas along the column direction;
counting the number of abnormal parallax points in the sub-area;
if the number of the abnormal parallax points is smaller than the parallax number threshold, combining the first obstacle frame and the second obstacle frame to obtain a target obstacle frame, including:
if the number of the abnormal parallax points in the subarea is smaller than the parallax number threshold of the subarea, determining the subarea as a parallax normal subarea;
and if the number of parallax-normal sub-areas in the middle area reaches a preset normal sub-area number threshold, merging the first obstacle frame and the second obstacle frame to obtain a target obstacle frame.
Optionally, the parallax number threshold has a positive correlation with the number of pixels in the column of the area to which the abnormal parallax point belongs.
Optionally, the center point of the first obstacle frame is located within the row range of the second obstacle frame, and/or the center point of the second obstacle frame is located within the row range of the first obstacle frame.
In a second aspect, the present invention provides an obstacle frame combining apparatus, the apparatus comprising:
a search unit, configured to search, in the disparity map, for a second obstacle frame that overlaps the first obstacle frame in the row direction and does not overlap it in the column direction;
a statistical unit, configured to determine a middle area located between the first obstacle frame and the second obstacle frame in the disparity map if an absolute value of a difference between Z-direction distances of the first obstacle frame and the second obstacle frame in a spatial coordinate system and an absolute value of a difference between X-direction distances of the first obstacle frame and the second obstacle frame are both smaller than preset distance thresholds in respective directions, and count the number of abnormal disparity points in the middle area;
and a merging unit, configured to merge the first obstacle frame and the second obstacle frame to obtain a target obstacle frame if the number of abnormal parallax points is smaller than a parallax number threshold.
In a third aspect, the present invention provides an obstacle frame merging terminal comprising a camera assembly, a processor, and a machine-readable storage medium storing machine-executable instructions executable by the processor, wherein the machine-executable instructions cause the processor to implement the obstacle frame merging method described above.
In a fourth aspect, the present invention provides a machine-readable storage medium having stored thereon machine-executable instructions which, when executed by a processor, implement any one of the above-described obstacle frame merging methods.
As can be seen from the above description, the present invention determines whether to merge two obstacle frames that satisfy the preset distance thresholds in the spatial coordinate system based on the number of abnormal disparity points in the middle region between the frames in the disparity map. If the number of abnormal parallax points (for example, parallax values generated by the background) in the middle area is large, the two obstacle frames are determined not to belong to the same obstacle and are not merged; conversely, if the number of abnormal parallax points in the middle area is small, the two obstacle frames are determined to belong to the same obstacle and are merged, thereby improving the accuracy of obstacle frame merging.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is an edge-based disparity map of an application scene according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an initial obstacle frame detected based on the edge-based disparity map shown in FIG. 1;
FIG. 3 is a schematic diagram of an obstacle frame obtained by merging the initial obstacle frames shown in FIG. 2 according to the prior art;
FIG. 4 is a flow chart of an obstacle frame merging method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of obstacle frames detected based on an edge-based disparity map according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an obstacle frame A and an obstacle frame D in a spatial coordinate system according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an obstacle frame a and an obstacle frame D in a disparity map according to an embodiment of the present invention;
fig. 8 is a disparity map including an obstacle frame a, an obstacle frame D, and a middle frame AD in an application scenario according to an embodiment of the present invention;
Fig. 9 is a disparity map including an obstacle frame a, an obstacle frame D, and a middle frame AD in another application scenario according to the embodiment of the present invention;
fig. 10 is a schematic diagram of an obstacle frame after the obstacle frame a and the obstacle frame D are combined in the application scenario shown in fig. 9;
FIG. 11 is a diagram illustrating sub-region partitioning according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of an obstacle frame based on the combination of the obstacle frames shown in FIG. 7 according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an obstacle frame merging terminal according to an embodiment of the present invention;
fig. 14 is a schematic diagram of a structure of obstacle frame merging logic according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if," as used herein, may be interpreted as "upon" or "when" or "in response to a determination," depending on the context.
Referring to fig. 1, which shows an edge-based disparity map of an application scene according to an embodiment of the present invention. As can be seen from the map, the effective disparity values are mainly concentrated on edge pixel points of objects with a large gradient, for example the pixel points on the human contour, while the disparity values of pixel points with a small row-direction gradient are usually invalid, for example the disparity values along the row direction of the distant vehicle. If an obstacle has too few pixels with effective disparity values and too many pixels with invalid row-direction disparity values, the same obstacle will be detected as a plurality of obstacle frames, and therefore the obstacle frames preliminarily detected from the edge-based disparity map need to be merged.
Fig. 2 is a schematic diagram of the initial obstacle frames detected based on the edge-based disparity map shown in fig. 1. In the prior art, whether to merge obstacle frames is determined based on the Z-direction and X-direction distances of the initial obstacle frames in the spatial coordinate system: for example, if the differences of the Z-direction distance and the X-direction distance of the two middle obstacle frames in fig. 2 are both smaller than the preset distance thresholds, the two frames are merged, yielding the merged obstacle frame shown in fig. 3. However, as can be seen from fig. 2, the two middle obstacle frames should not be merged; that is, merging based only on the Z-direction and X-direction distances is not accurate.
To address the problem that obstacle frame merging is not accurate, the present invention provides an obstacle frame merging method. Referring to fig. 4, a flowchart of an embodiment of the obstacle frame merging method of the present invention, the merging process is described below.
Step 401, searching for a second obstacle frame which is overlapped with the first obstacle frame in the row direction and is not overlapped with the first obstacle frame in the column direction in the disparity map.
The method denotes the current obstacle frame to be merged as the first obstacle frame.
In the edge-based disparity map, the effective disparity values of an obstacle are mainly concentrated on object edges with a large row-direction gradient, while most disparity values of pixel points with a small row-direction gradient are invalid. That is, an obstacle has continuous column-direction disparity values and discontinuous row-direction disparity values. Therefore, when searching the disparity map for a second obstacle frame that may be merged with the first obstacle frame, the second obstacle frame should overlap the first obstacle frame in the row direction and not overlap it in the column direction.
Optionally, the degree of overlap between the second obstacle frame and the first obstacle frame is constrained: specifically, the center point of the second obstacle frame is located within the row range of the first obstacle frame, and/or the center point of the first obstacle frame is located within the row range of the second obstacle frame, which increases the likelihood that the two frames belong to the same obstacle and improves the merging accuracy.
For example, referring to fig. 5, a schematic diagram of obstacle frames detected based on an edge-based disparity map according to an embodiment of the present invention, which includes obstacle frames A to D; the black dot in each obstacle frame is its center point. If the obstacle frame A is the first obstacle frame, it is denoted rectA(x, y, width, height), and a searched obstacle frame is denoted rectX(x, y, width, height), where x is the abscissa of the obstacle frame's top-left vertex on the x-axis of the disparity map; y is the ordinate of the top-left vertex on the y-axis of the disparity map; width is the width of the obstacle frame in pixels; and height is the height of the obstacle frame in pixels.
Taking the search to the right as an example, the obstacle frame located at the right side of the obstacle frame a satisfies the following formula:
rectA.x + rectA.width < rectX.x    formula (1)
wherein rectA.x represents the abscissa of the obstacle frame A; rectA.width represents the width of the obstacle frame A; and rectX.x represents the abscissa of the obstacle frame X.
If the degree of overlap between the obstacle frame A and the obstacle frame X satisfies the following formula (2) and/or formula (3), the obstacle frame X is the searched second obstacle frame. Specifically:
rectX.y ≤ rectA.y + rectA.height/2 ≤ rectX.y + rectX.height    formula (2)
rectA.y ≤ rectX.y + rectX.height/2 ≤ rectA.y + rectA.height    formula (3)
wherein rectA.y represents the ordinate of the obstacle frame A; rectA.height represents the height of the obstacle frame A; rectX.y represents the ordinate of the obstacle frame X; and rectX.height represents the height of the obstacle frame X. Formula (2) indicates that the center point of the obstacle frame A is located within the row range of the obstacle frame X; formula (3) indicates that the center point of the obstacle frame X is located within the row range of the obstacle frame A.
Based on the above formulas, the obstacle frame D and the obstacle frame A in fig. 5 satisfy the overlap requirement, so the obstacle frame D is the second obstacle frame.
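As a non-authoritative illustration, the search conditions of formulas (1) to (3) can be sketched as follows; the Rect type and the function names are hypothetical helpers, not identifiers from the patent.

```python
# Sketch of the second-obstacle-frame search (formulas (1)-(3)).
# Rect and all function names are illustrative assumptions.
from collections import namedtuple

Rect = namedtuple("Rect", ["x", "y", "width", "height"])

def is_right_candidate(rect_a, rect_x):
    """Formula (1): rect_x lies strictly to the right of rect_a (no column overlap)."""
    return rect_a.x + rect_a.width < rect_x.x

def row_overlap(rect_a, rect_x):
    """Formulas (2)/(3): the center row of one frame falls within the other's row span."""
    center_a = rect_a.y + rect_a.height / 2.0
    center_x = rect_x.y + rect_x.height / 2.0
    in_x = rect_x.y <= center_a <= rect_x.y + rect_x.height   # formula (2)
    in_a = rect_a.y <= center_x <= rect_a.y + rect_a.height   # formula (3)
    return in_x or in_a

def find_second_frame(rect_a, candidates):
    """Return the first candidate to the right of rect_a that overlaps it in the row direction."""
    for rect in candidates:
        if is_right_candidate(rect_a, rect) and row_overlap(rect_a, rect):
            return rect
    return None
```

In the fig. 5 configuration, a frame like D that is to the right of A and shares A's row span would be returned; a frame overlapping A in the column direction fails formula (1) and is skipped.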
Step 402, if the absolute value of the difference between the distances in the Z direction of the first obstacle frame and the second obstacle frame in the spatial coordinate system and the absolute value of the difference between the distances in the X direction are both smaller than the preset distance thresholds in the respective directions, determining a middle area between the first obstacle frame and the second obstacle frame in the disparity map, and counting the number of abnormal disparity points in the middle area.
The Z-direction distance of the first obstacle frame in the spatial coordinate system is calculated based on the parallax range of the first obstacle in the first obstacle frame (when the initial obstacle frames are detected from the edge-based disparity map, the information of each detected frame is acquired, including the coordinates of the frame and the parallax range of the obstacle inside it); the Z-direction distance of the second obstacle frame, in a spatial coordinate system with the camera as the origin, is calculated based on the parallax range of the second obstacle in the second obstacle frame. The Z-direction distance is the depth of field, that is, the actual distance from the camera in the Z direction.
If the absolute value of the difference value of the distances in the Z direction between the first obstacle frame and the second obstacle frame is smaller than the preset distance threshold in the Z direction, it indicates that the distance between the first obstacle frame and the second obstacle frame in the Z direction is short, and the possibility of being the same obstacle is high.
The X-direction distances of the first obstacle frame and the second obstacle frame are calculated by the following formula:
Xq = (pixelq − x0) · p · Zq / f    formula (4)
wherein Zq represents the Z-direction coordinate (i.e., the Z-direction distance) of a spatial point q in the spatial coordinate system; f is the focal length of the camera; p is the actual size of one pixel point; pixelq is the abscissa of the point q on the x-axis of the disparity map; x0 is the abscissa of the center point of the disparity map on the x-axis; and Xq represents the X-direction coordinate (i.e., the X-direction distance) of the point q in the spatial coordinate system.
The X-direction distances of the first obstacle frame and the second obstacle frame can be calculated through formula (4); if the absolute value of the difference of the X-direction distances between the two frames is smaller than the preset distance threshold in the X direction, the two frames are close in the X direction and are likely to belong to the same obstacle.
For example, referring to fig. 6, a schematic diagram of the obstacle frame A and the obstacle frame D in the spatial coordinate system (the Z direction is not illustrated) according to an embodiment of the present invention, the absolute value of the difference of the X-direction distances between the obstacle frame A and the obstacle frame D can be expressed as:
ΔX = |Xa − Xd|    formula (5)
wherein Xa is the X-direction distance of the right boundary of the obstacle frame A; Xd is the X-direction distance of the left boundary of the obstacle frame D; and ΔX is the absolute value of the difference of the X-direction distances between the obstacle frame A and the obstacle frame D. If ΔX is smaller than the preset distance threshold in the X direction, the obstacle frame A and the obstacle frame D are close in the X direction and may belong to the same obstacle.
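A minimal sketch of formulas (4) and (5) follows; the function names and the numeric values used below (focal length, pixel size) are illustrative assumptions, not calibration data from the patent.

```python
# Sketch of formulas (4) and (5): pinhole-model X-direction distance and the
# X-direction gap test. All names and example values are illustrative.
def x_distance(pixel_q, x0, z_q, f, p):
    """Formula (4): X_q = (pixel_q - x0) * p * Z_q / f."""
    return (pixel_q - x0) * p * z_q / f

def x_gap_below_threshold(xa_right, xd_left, threshold):
    """Formula (5): merge candidates require |Xa - Xd| < X-direction threshold."""
    return abs(xa_right - xd_left) < threshold
```

For instance, with an assumed focal length of 6 mm, a 3.75 micrometer pixel, a depth of 10 m, and a 60-pixel offset from the image center, formula (4) gives an X-direction distance of 0.375 m.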
In the invention, if the absolute value of the difference of the Z-direction distances and the absolute value of the difference of the X-direction distances between the first obstacle frame and the second obstacle frame are both smaller than the distance thresholds in the respective directions, it is preliminarily judged that the two frames may belong to the same obstacle; that is, two obstacle frames belonging to the same obstacle do not differ greatly in their Z-direction and X-direction distances.
After it is preliminarily judged that the first obstacle frame and the second obstacle frame may belong to the same obstacle, the middle area between them in the disparity map is determined. Still taking the obstacle frame A and the obstacle frame D as an example, referring to fig. 7, a schematic diagram of the obstacle frame A and the obstacle frame D in the disparity map according to an embodiment of the present invention, the shaded area between the two frames is the middle area, denoted as the middle frame AD; its coordinates rectAD(x, y, width, height) can be obtained by the following formula:
rectAD.x = rectA.x + rectA.width
rectAD.y = min(rectA.y, rectD.y)
rectAD.width = rectD.x − rectA.x − rectA.width
rectAD.height = max(rectA.y + rectA.height, rectD.y + rectD.height) − rectAD.y
wherein rectA.x represents the abscissa of the obstacle frame A; rectA.width represents the width of the obstacle frame A; rectA.y represents the ordinate of the obstacle frame A; rectD.y represents the ordinate of the obstacle frame D; rectD.x represents the abscissa of the obstacle frame D; rectA.height represents the height of the obstacle frame A; rectD.height represents the height of the obstacle frame D; rectAD.x represents the abscissa of the middle frame AD; rectAD.y represents the ordinate of the middle frame AD; rectAD.width represents the width of the middle frame AD; and rectAD.height represents the height of the middle frame AD.
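The middle-frame computation can be sketched as below. Note this is a reconstruction: the original formula image is not available, so the sketch assumes the middle frame spans the horizontal gap between the two frames and the union of their row extents, consistent with the variables listed above.

```python
# Sketch of the middle frame AD between a left frame A and a right frame D.
# Assumption: AD covers the horizontal gap and the union of the row extents.
def middle_frame(a, d):
    """a, d: dicts with keys x, y, width, height (pixels). Returns rectAD."""
    x = a["x"] + a["width"]                       # right edge of A
    y = min(a["y"], d["y"])                       # topmost row of the two frames
    width = d["x"] - a["x"] - a["width"]          # horizontal gap between A and D
    height = max(a["y"] + a["height"],
                 d["y"] + d["height"]) - y        # union of the row extents
    return {"x": x, "y": y, "width": width, "height": height}
```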
After the intermediate region between the first obstacle frame and the second obstacle frame is determined, the number of abnormal parallax points in the intermediate region is counted. Specifically, first, the parallax range of the middle area is determined based on the parallax range corresponding to the first obstacle in the first obstacle frame and the parallax range corresponding to the second obstacle in the second obstacle frame. Still taking the middle frame AD in fig. 7 as an example, if the obstacle in the obstacle frame a and the obstacle in the obstacle frame D are the same obstacle, the parallax range of the middle frame AD should be:
AD.minDisp = min(A.minDisp, D.minDisp)
AD.maxDisp = max(A.maxDisp, D.maxDisp)
wherein A.minDisp represents the minimum parallax value of the obstacle in the obstacle frame A; D.minDisp represents the minimum parallax value of the obstacle in the obstacle frame D; A.maxDisp represents the maximum parallax value of the obstacle in the obstacle frame A; D.maxDisp represents the maximum parallax value of the obstacle in the obstacle frame D; AD.minDisp represents the minimum parallax value of the parallax range of the middle frame AD; and AD.maxDisp represents the maximum parallax value of the parallax range of the middle frame AD.
If the parallax value of a pixel point in the middle area does not fall within the determined parallax range of the middle area, the pixel point is determined to be an abnormal parallax point, and the number of abnormal parallax points in the middle area is counted.
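A minimal sketch of the abnormal-point count, under the assumption that invalid disparities are marked with 0 and are not counted as abnormal (the text treats only out-of-range valid disparities, such as background values, as abnormal):

```python
# Sketch of abnormal-parallax-point counting in the middle area.
# Assumption: INVALID marks pixels with no effective disparity.
INVALID = 0

def count_abnormal(disparities, min_a, max_a, min_d, max_d):
    """Count valid disparities outside [AD.minDisp, AD.maxDisp]."""
    lo = min(min_a, min_d)   # AD.minDisp
    hi = max(max_a, max_d)   # AD.maxDisp
    return sum(1 for d in disparities
               if d != INVALID and not (lo <= d <= hi))
```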
Referring to fig. 8, a disparity map of an application scene according to an embodiment of the present invention, the obstacle in the obstacle frame A and the obstacle in the obstacle frame D do not belong to the same obstacle. The middle frame AD between them contains not only invalid disparity values (shown in black) but also a large number of disparity values generated by the background, for example the disparity values at the zebra-crossing positions. Pixel points whose disparity values belong neither to the parallax range of the obstacle in the obstacle frame A nor to that of the obstacle in the obstacle frame D are abnormal parallax points; that is, the middle area between two obstacle frames that do not belong to the same obstacle generally contains a large number of abnormal parallax points.
Referring to fig. 9, a disparity map of another application scenario according to an embodiment of the present invention, the obstacle in the obstacle frame A and the obstacle in the obstacle frame D belong to the same obstacle (a vehicle), and the middle frame AD between them contains substantially no abnormal parallax points; that is, the middle area between two obstacle frames belonging to the same obstacle usually contains no or very few abnormal parallax points.
And step 403, if the number of the abnormal parallax points is smaller than the parallax number threshold, combining the first obstacle frame and the second obstacle frame to obtain a target obstacle frame.
The invention can preset the parallax quantity threshold value of the middle area.
Optionally, because an obstacle appears larger when nearer and smaller when farther, the middle area between the corresponding obstacle frames has the same characteristic: the middle area of a distant obstacle contains fewer pixels, and that of a near obstacle contains more. Since the number of pixels with effective parallax values in the edge-based disparity map is small, the parallax number threshold of the middle area is set to be positively correlated with the number of pixels of the middle area in the column direction; specifically, it can be set to half of that number of pixels.
If the number of abnormal parallax points in the middle area is smaller than the parallax number threshold of the middle area, there are very few abnormal parallax points between the first obstacle frame and the second obstacle frame (such points may be caused by matching errors), and the probability that the first obstacle and the second obstacle are the same obstacle is very high; therefore, the two frames are merged to obtain the target obstacle frame. For example, since the middle frame AD shown in fig. 9 contains very few abnormal parallax points, the obstacle frame A and the obstacle frame D should be merged, obtaining the merged obstacle frame shown in fig. 10.
If the number of abnormal parallax points in the middle area is greater than or equal to the parallax number threshold of the middle area, a large number of abnormal parallax points exist between the first obstacle frame and the second obstacle frame, the first obstacle and the second obstacle do not belong to the same obstacle, and the two frames are not merged. For example, the middle frame AD shown in fig. 8 contains many abnormal parallax points, so the obstacle frame A and the obstacle frame D cannot be merged.
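The step-403 decision can be sketched as follows. As the text suggests, the threshold is taken here as half the column-direction pixel count of the middle area; that specific fraction is the text's example, and the helper names are illustrative.

```python
# Sketch of the merge decision of step 403.
# Assumption: threshold = half the column-direction pixel count (text's example).
def parallax_threshold(column_pixel_count):
    """Threshold positively correlated with the column-direction pixel count."""
    return column_pixel_count / 2.0

def should_merge(num_abnormal, column_pixel_count):
    """Merge only when the abnormal-point count stays below the threshold."""
    return num_abnormal < parallax_threshold(column_pixel_count)
```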
Optionally, the middle area may be divided into sub-areas, and whether the first obstacle frame and the second obstacle frame need to be merged is determined based on the sub-areas. Specifically, the middle area is divided into a preset number of sub-areas along the column direction, and the number of abnormal parallax points in each sub-area is counted. If the number of abnormal parallax points in a sub-area is smaller than the parallax number threshold of that sub-area (optionally, the parallax number threshold of a sub-area is positively correlated with the number of pixels of the sub-area in the column direction; specifically, it may be set to half the number of pixels of the sub-area in the column direction), the sub-area is determined to be a parallax-normal sub-area. The number of parallax-normal sub-areas in the middle area is then counted; if it reaches a preset normal sub-area number threshold, the first obstacle frame and the second obstacle frame are combined to obtain the target obstacle frame; otherwise, combining the first obstacle frame and the second obstacle frame is prohibited.
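The sub-area variant can be sketched as below. This is an illustrative reading of the text, assuming each sub-area's parallax number threshold is half its column-direction pixel count; the names are not from the patent.

```python
def should_merge_by_subregions(abnormal_counts, subregion_heights,
                               normal_subregion_threshold):
    """Sub-area based merge decision.

    abnormal_counts[i]:   abnormal parallax points counted in sub-area i
    subregion_heights[i]: pixel count of sub-area i in the column direction
    A sub-area is parallax-normal when its abnormal count is strictly below
    half its column-direction pixel count; the frames are merged when the
    number of parallax-normal sub-areas reaches the preset threshold.
    """
    normal_count = sum(
        1 for count, height in zip(abnormal_counts, subregion_heights)
        if count < height // 2
    )
    return normal_count >= normal_subregion_threshold
```

Applied to the Table 1 example below (abnormal counts 100, 40, 60; per-sub-area threshold 50; normal sub-area threshold 2), only one sub-area is parallax-normal, so merging is refused.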
Referring to fig. 11, which is a schematic view of sub-region division according to an embodiment of the present invention, a middle frame AD is divided into 3 sub-regions, namely, a sub-region 1, a sub-region 2, and a sub-region 3, by using a dotted line, and the number of abnormal parallax points in each sub-region is counted, as shown in table 1.
Sub-region      Number of abnormal parallax points
Sub-region 1    100
Sub-region 2    40
Sub-region 3    60
TABLE 1
Assume the preset parallax number threshold of each sub-region is 50, and the preset normal sub-region number threshold is 2. As can be seen from table 1, the number of abnormal parallax points in sub-region 1 (100) is greater than the parallax number threshold (50), so sub-region 1 is a parallax-abnormal sub-region; the number of abnormal parallax points in sub-region 2 (40) is smaller than the parallax number threshold (50), so sub-region 2 is a parallax-normal sub-region; the number of abnormal parallax points in sub-region 3 (60) is greater than the parallax number threshold (50), so sub-region 3 is a parallax-abnormal sub-region. The number of parallax-normal sub-regions in the intermediate frame AD is therefore 1, which is smaller than the preset normal sub-region number threshold (2). It is thus determined that the obstacle in obstacle frame A and the obstacle in obstacle frame D do not belong to the same obstacle, and the frames cannot be combined.
By dividing the middle area into sub-regions, the method avoids being misled when abnormal parallax points are concentrated in one local area, so that the decision to merge two obstacle frames rests on whether parallax-normal sub-regions are in the majority.
In addition, as can be seen from fig. 11, due to the posture of the obstacle, the interior of an obstacle frame is not entirely occupied by the obstacle but also contains part of the background. The parallax values of the pixels corresponding to this background do not fall within the parallax range of the obstacle, so these pixels are abnormal parallax points, and they can be included in the abnormal parallax point statistics. Specifically, starting from the boundary between the middle area (intermediate frame AD) and the obstacle frame, pixels inside the obstacle frame are scanned row by row; each pixel whose parallax value does not fall within the parallax range of the obstacle is counted (plus 1), and the scan of the current row stops at the first pixel whose parallax value does fall within the parallax range of the obstacle. By also counting the abnormal parallax points inside the obstacle frames, the accuracy of judging whether two obstacle frames can be combined is improved.
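The row-by-row scan just described can be sketched as follows. This is a minimal illustration under the stated stopping rule; the parameter names and the (x, y, width, height) frame convention are assumptions introduced here.

```python
def count_boundary_abnormal_points(disparity, frame, parallax_range, from_left):
    """Count background pixels inside an obstacle frame, scanning each row
    from the side adjacent to the middle area.

    disparity:      2D list of parallax values, indexed [row][col]
    frame:          (x, y, width, height) of the obstacle frame
    parallax_range: (lo, hi) parallax range of the obstacle
    from_left:      True if the middle area borders the frame's left edge

    Each row is scanned inward; pixels outside the obstacle's parallax
    range are counted until the first in-range pixel is reached, at which
    point the current row's scan stops.
    """
    x, y, width, height = frame
    lo, hi = parallax_range
    count = 0
    for row in range(y, y + height):
        cols = range(x, x + width) if from_left else range(x + width - 1, x - 1, -1)
        for col in cols:
            if lo <= disparity[row][col] <= hi:
                break  # first pixel inside the obstacle's parallax range
            count += 1
    return count
```

The count returned here would be added to the abnormal parallax points found in the middle area before comparing against the threshold.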
Taking the obstacle frame a and the obstacle frame D shown in fig. 7 as an example, if the obstacle frame a and the obstacle frame D can be merged, the merged new obstacle frame Obs is shown in fig. 12, where the dashed boxes in the obstacle frame Obs respectively represent the obstacle frame a and the obstacle frame D before merging, and the coordinates of the obstacle frame Obs are rectObs (x, y, width, height) and can be obtained by the following formula:
rectObs.x = min(rectA.x, rectD.x)
rectObs.y = min(rectA.y, rectD.y)
rectObs.width = max(rectA.x + rectA.width, rectD.x + rectD.width) - rectObs.x
rectObs.height = max(rectA.y + rectA.height, rectD.y + rectD.height) - rectObs.y
Wherein, rectA.x represents the abscissa of the obstacle frame A; rectD.x represents the abscissa of the obstacle frame D; rectA.y represents the ordinate of the obstacle frame A; rectD.y represents the ordinate of the obstacle frame D; rectA.width represents the width value of the obstacle frame A; rectD.width represents the width value of the obstacle frame D; rectA.height represents the height value of the obstacle frame A; rectD.height represents the height value of the obstacle frame D; rectObs.x represents the abscissa of the obstacle frame Obs; rectObs.y represents the ordinate of the obstacle frame Obs; rectObs.width represents the width value of the obstacle frame Obs; rectObs.height represents the height value of the obstacle frame Obs.
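The merged-frame coordinates amount to the bounding-box union of the two rectangles, which can be written directly. A minimal sketch, assuming the (x, y, width, height) tuple convention used above:

```python
def merge_frames(rect_a, rect_d):
    """Return the bounding box (x, y, width, height) enclosing both
    obstacle frames, following the formula above."""
    ax, ay, aw, ah = rect_a
    dx, dy, dw, dh = rect_d
    x = min(ax, dx)
    y = min(ay, dy)
    width = max(ax + aw, dx + dw) - x
    height = max(ay + ah, dy + dh) - y
    return (x, y, width, height)
```

For example, merging (10, 20, 30, 40) and (50, 25, 20, 30) yields (10, 20, 60, 40): the new frame starts at the smaller corner and extends to the farther right and bottom edges.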
As can be seen from the above description, the present invention determines whether to merge two obstacle frames that satisfy a distance threshold in a spatial coordinate system based on the number of abnormal parallax points in the middle region between the obstacle frames in a parallax map, and determines that the two obstacle frames do not belong to the same obstacle and do not merge if the number of abnormal parallax points in the middle region is large; on the contrary, if the number of the abnormal parallax points in the middle area is small, the two obstacle frames are determined to belong to the same obstacle, and the obstacle frames are merged, so that the accuracy of merging the obstacle frames is improved.
Referring to an edge-based disparity map in an application scene shown in fig. 1, an initial obstacle frame shown in fig. 2 can be obtained by performing obstacle frame detection on the edge-based disparity map. If a larger distance threshold is set by using the prior art, a merged obstacle frame as shown in fig. 3 can be obtained, that is, the two middle obstacle frames are considered to be the same obstacle and merged because they satisfy the distance threshold. However, as is apparent from fig. 2, the two obstacle frames do not belong to the same obstacle and should not be merged, and therefore, the accuracy of merging the existing obstacle frames is not high.
However, as shown in fig. 8 and 11, the present invention counts the number of abnormal parallax points for two obstacle frames that satisfy the distance threshold based on the intermediate area (intermediate frame AD) between the two obstacle frames, and if the number of abnormal parallax points is small, it is considered that the two obstacle frames belong to the same obstacle and are merged; if the number of the abnormal parallax points is large, it indicates that a large amount of background information (for example, a zebra crossing between the obstacle frame a and the obstacle frame D in fig. 8 and 11) exists between the two obstacle frames, and therefore, it is determined that the two obstacle frames do not belong to the same obstacle, and are not merged, thereby improving the accuracy of merging the obstacle frames.
Fig. 13 is a schematic diagram of a hardware structure of the obstacle frame merging terminal according to the present invention. The terminal 13 includes a processor 1301, a machine-readable storage medium 1302 having machine-executable instructions stored thereon, and a camera assembly 1304. The processor 1301 and the machine-readable storage medium 1302 may communicate via a system bus 1303. The processor 1301 may perform the obstacle frame merging method described above by reading and executing the machine-executable instructions in the machine-readable storage medium 1302 that correspond to the obstacle frame merging logic.
The machine-readable storage medium 1302 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The camera assembly 1304 is used to capture images, and may include at least two cameras, which may be the left camera and the right camera of a binocular camera, respectively.
As shown in fig. 14, functionally divided, the above-mentioned obstacle box merging logic may include a search unit 1401, a statistical unit 1402, and a merging unit 1403, where:
a search unit 1401 for searching for a second obstacle frame that does not overlap the first obstacle frame in the parallax map in the row direction and the column direction;
a counting unit 1402, configured to determine a middle area located between the first obstacle frame and the second obstacle frame in the disparity map if an absolute value of a difference between Z-direction distances of the first obstacle frame and the second obstacle frame in a spatial coordinate system and an absolute value of a difference between X-direction distances of the first obstacle frame and the second obstacle frame in the spatial coordinate system are both smaller than preset distance thresholds in respective directions, and count the number of abnormal disparity points in the middle area;
a merging unit 1403, configured to merge the first obstacle frame and the second obstacle frame to obtain a target obstacle frame if the number of the abnormal parallax points is smaller than a parallax number threshold.
Optionally, the barrier box merging logic further comprises:
a determining unit, configured to determine a parallax range of the middle area based on a parallax range corresponding to a first obstacle in the first obstacle frame and a parallax range corresponding to a second obstacle in the second obstacle frame; and if the parallax value of the pixel point in the middle area does not fall into the parallax range of the middle area, determining the pixel point as an abnormal parallax point.
In the alternative,
the statistical unit 1402 is specifically configured to divide the middle area into a preset number of sub-areas along a column direction; counting the number of abnormal parallax points in the sub-area;
the merging unit 1403 is specifically configured to determine that the sub-region is a normal sub-region if the number of the abnormal parallax points in the sub-region is smaller than the parallax number threshold of the sub-region; and if the number of the parallax normal sub-areas in the middle area reaches a preset normal sub-area number threshold, combining the first barrier frame and the second barrier frame to obtain a target barrier frame.
Optionally, the parallax number threshold has a positive correlation with the number of pixels in the column of the area to which the abnormal parallax point belongs.
Optionally, the center point of the first obstacle frame is located within the range of the row in which the second obstacle frame is located, and/or the center point of the second obstacle frame is located within the range of the row in which the first obstacle frame is located.
The present invention also provides a machine-readable storage medium, such as the machine-readable storage medium 1302 in fig. 13, comprising machine-executable instructions that are executable by the processor 1301 in the obstacle frame merging terminal to implement the obstacle frame merging method described above.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. An obstacle frame merging method, the method comprising:
searching a second barrier frame which is independent from the first barrier frame in the disparity map, is overlapped with the first barrier frame in the row direction and is not overlapped with the first barrier frame in the column direction; the center point of the first obstacle frame is positioned in the range of the line of the second obstacle frame, and/or the center point of the second obstacle frame is positioned in the range of the line of the first obstacle frame;
if the absolute value of the difference between the distances in the Z direction and the absolute value of the difference between the distances in the X direction of the first obstacle frame and the second obstacle frame in the spatial coordinate system are both smaller than the preset distance threshold in the respective directions, determining a middle area between the first obstacle frame and the second obstacle frame in the disparity map, and determining a disparity range of the middle area based on a disparity range corresponding to a first obstacle in the first obstacle frame and a disparity range corresponding to a second obstacle in the second obstacle frame under the condition that the first obstacle frame and the second obstacle frame are assumed to be the same obstacle; if the parallax value of the pixel point in the middle area does not fall into the parallax range of the middle area, determining the pixel point as an abnormal parallax point; counting the number of abnormal parallax points in the middle area;
And if the number of the abnormal parallax points is smaller than a parallax number threshold, combining the first obstacle frame and the second obstacle frame to obtain a target obstacle frame.
2. The method of claim 1, wherein counting the number of abnormal parallax points in the intermediate region comprises:
dividing the middle area into a preset number of sub-areas along the column direction;
counting the number of abnormal parallax points in the sub-area;
if the number of the abnormal parallax points is smaller than the parallax number threshold, combining the first obstacle frame and the second obstacle frame to obtain a target obstacle frame, including:
if the number of the abnormal parallax points in the subarea is smaller than the parallax number threshold of the subarea, determining the subarea as a parallax normal subarea;
and if the number of the parallax normal subareas in the middle area reaches a preset normal subarea number threshold, combining the first obstacle frame and the second obstacle frame to obtain a target obstacle frame.
3. The method according to any one of claims 1 to 2, wherein the disparity number threshold has a positive correlation with the number of pixels in the column of the region to which the abnormal disparity point belongs.
4. An obstacle frame merging apparatus, comprising:
the search unit is used for searching a second barrier frame which is independent from the first barrier frame in the disparity map, is overlapped with the first barrier frame in the row direction and is not overlapped with the first barrier frame in the column direction; the center point of the first obstacle frame is positioned in the range of the line of the second obstacle frame, and/or the center point of the second obstacle frame is positioned in the range of the line of the first obstacle frame;
a statistical unit, configured to determine a middle area located between the first obstacle frame and the second obstacle frame in the disparity map if an absolute value of a difference between Z-direction distances of the first obstacle frame and the second obstacle frame in a spatial coordinate system and an absolute value of a difference between X-direction distances of the first obstacle frame and the second obstacle frame are both smaller than preset distance thresholds in respective directions, and determine a disparity range of the middle area based on a disparity range corresponding to a first obstacle in the first obstacle frame and a disparity range corresponding to a second obstacle in the second obstacle frame under the assumption that the first obstacle frame and the second obstacle frame are the same obstacle; if the parallax value of the pixel point in the middle area does not fall into the parallax range of the middle area, determining the pixel point as an abnormal parallax point; counting the number of abnormal parallax points in the middle area;
And the merging unit is used for merging the first obstacle frame and the second obstacle frame to obtain a target obstacle frame if the number of the abnormal parallax points is smaller than a parallax number threshold.
5. The apparatus of claim 4,
the statistical unit is specifically configured to divide the middle region into a preset number of sub-regions along a column direction; counting the number of abnormal parallax points in the sub-area;
the merging unit is specifically configured to determine that the sub-region is a normal-parallax sub-region if the number of abnormal parallax points in the sub-region is smaller than the parallax number threshold of the sub-region; and if the number of the parallax normal subareas in the middle area reaches a preset normal subarea number threshold, combining the first obstacle frame and the second obstacle frame to obtain a target obstacle frame.
6. An obstacle-box merge terminal comprising a camera assembly, a processor, and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being caused by the machine-executable instructions to: carrying out the method steps of any one of claims 1 to 3.
7. A machine-readable storage medium having stored thereon machine-executable instructions which, when executed by a processor, perform the method steps of any one of claims 1-3.
CN201810052738.0A 2018-01-19 2018-01-19 Barrier frame combining method, device and terminal Active CN108596012B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810052738.0A CN108596012B (en) 2018-01-19 2018-01-19 Barrier frame combining method, device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810052738.0A CN108596012B (en) 2018-01-19 2018-01-19 Barrier frame combining method, device and terminal

Publications (2)

Publication Number Publication Date
CN108596012A CN108596012A (en) 2018-09-28
CN108596012B true CN108596012B (en) 2022-07-15

Family

ID=63608500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810052738.0A Active CN108596012B (en) 2018-01-19 2018-01-19 Barrier frame combining method, device and terminal

Country Status (1)

Country Link
CN (1) CN108596012B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036210B (en) * 2019-06-03 2024-03-08 杭州海康机器人股份有限公司 Method and device for detecting obstacle, storage medium and mobile robot
CN113496146A (en) * 2020-03-19 2021-10-12 苏州科瓴精密机械科技有限公司 Automatic work system, automatic walking device, control method thereof, and computer-readable storage medium
US11560159B2 (en) * 2020-03-25 2023-01-24 Baidu Usa Llc Group and combine obstacles for autonomous driving vehicles
CN113378628B (en) * 2021-04-27 2023-04-14 阿里云计算有限公司 Road obstacle area detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123722A (en) * 2011-11-18 2013-05-29 株式会社理光 Road object detection method and system
CN105513064A (en) * 2015-12-03 2016-04-20 浙江万里学院 Image segmentation and adaptive weighting-based stereo matching method
CN106650708A (en) * 2017-01-19 2017-05-10 南京航空航天大学 Visual detection method and system for automatic driving obstacles

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI387862B (en) * 2009-11-27 2013-03-01 Micro Star Int Co Ltd Moving devices and controlling methods therefor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123722A (en) * 2011-11-18 2013-05-29 株式会社理光 Road object detection method and system
CN105513064A (en) * 2015-12-03 2016-04-20 浙江万里学院 Image segmentation and adaptive weighting-based stereo matching method
CN106650708A (en) * 2017-01-19 2017-05-10 南京航空航天大学 Visual detection method and system for automatic driving obstacles

Also Published As

Publication number Publication date
CN108596012A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN108596012B (en) Barrier frame combining method, device and terminal
US9438877B2 (en) Disparity calculating device and distance calculating device
JP5870273B2 (en) Object detection apparatus, object detection method, and program
US9237326B2 (en) Imaging system and method
US10659762B2 (en) Stereo camera
CN105069804B (en) Threedimensional model scan rebuilding method based on smart mobile phone
US20150367781A1 (en) Lane boundary estimation device and lane boundary estimation method
CN111340749B (en) Image quality detection method, device, equipment and storage medium
CN107980138A (en) A kind of false-alarm obstacle detection method and device
JP3729025B2 (en) Pedestrian detection device
US20170270680A1 (en) Method for Determining Depth Maps from Stereo Images with Improved Depth Resolution in a Range
BR112016010089A2 (en) moving body position estimation device and moving body position estimation method
CN112215794B (en) Method and device for detecting dirt of binocular ADAS camera
US11889047B2 (en) Image processing device and image processing method
CN107977649B (en) Obstacle identification method and device and terminal
CN108399360A (en) A kind of continuous type obstacle detection method, device and terminal
CN108647579B (en) Obstacle detection method and device and terminal
CN108256510B (en) Road edge line detection method and device and terminal
Wang et al. Robust obstacle detection based on a novel disparity calculation method and G-disparity
CN108319910B (en) Vehicle identification method and device and terminal
JP7293100B2 (en) camera system
CN107330932B (en) Method and device for repairing noise in parallax map
WO2020036039A1 (en) Stereo camera device
US11460582B2 (en) Vehicle exterior environment monoscopic and stereoscopic based detection apparatus
US11842552B2 (en) Vehicle exterior environment recognition apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant