CN107977649B - Obstacle identification method and device and terminal


Info

Publication number
CN107977649B
Authority
CN
China
Prior art keywords
detection window
obstacle
image
pixel
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711389258.5A
Other languages
Chinese (zh)
Other versions
CN107977649A (en)
Inventor
曲磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Co Ltd
Original Assignee
Hisense Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Co Ltd
Priority to CN201711389258.5A
Publication of CN107977649A
Priority to PCT/CN2018/091638 (WO2019119752A1)
Application granted
Publication of CN107977649B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The embodiment of the invention provides a method, a device and a terminal for identifying obstacles, which relate to the technical field of auxiliary driving, and the method comprises the following steps: extracting oblique lines from a V disparity map of an image to be recognized; determining a first starting pixel on an oblique line, wherein the sum of the numerical values of pixels in a pixel set corresponding to the first starting pixel is greater than a first threshold value, and the pixel set corresponding to the first starting pixel comprises pixels located above the oblique line in a column where the first starting pixel is located; determining a detection window corresponding to the first starting pixel in the V disparity map according to the disparity of the first starting pixel and a window parameter corresponding to a preset obstacle type; and determining the type of the obstacle identified in the image to be identified according to the identification degree of the image in the detection window. The method is used for improving the accuracy of obstacle identification.

Description

Obstacle identification method and device and terminal
Technical Field
The embodiment of the invention relates to the technical field of auxiliary driving, in particular to a method, a device and a terminal for identifying obstacles.
Background
In the image recognition technology, image information of high resolution and depth information of an image can be acquired by the binocular stereoscopic vision technology, and thus the binocular stereoscopic vision technology is widely used in the image recognition.
In the process of identifying an obstacle based on binocular stereo vision, a disparity map of the image to be identified is generally obtained using the binocular stereo vision technology, and the obstacle is identified according to lines in the disparity map. In the disparity map, the edge lines of obstacles are normally preserved, while homogeneous, weakly textured parts usually disappear, so the continuous envelope of an obstacle in the disparity map may be divided into discontinuous segments. For example, for a bus in the image, the whole outline of the bus in the disparity map may be divided into two discontinuous parts (a head line and a tail line); in the identification process, the head and the tail of the bus are then often determined to be two separate objects, so that identification of the bus in the image fails.
As described above, in the related art, since the envelope line of the same obstacle in the disparity map may be discontinuous, when the obstacle is identified from the line in the disparity map, the same obstacle may be identified as a plurality of obstacles, which results in a low accuracy of obstacle identification.
Disclosure of Invention
In order to solve the problem that the same obstacle is recognized into a plurality of obstacles during obstacle recognition due to discontinuity of envelope lines of the same obstacle in a disparity map, embodiments of the present invention provide an obstacle recognition method, an apparatus, and a terminal, which improve accuracy of obstacle recognition.
In a first aspect, an embodiment of the present invention provides an obstacle identification method, including:
extracting oblique lines from a V disparity map of an image to be recognized;
determining a first starting pixel on the inclined line, wherein the sum of the numerical values of pixels in a pixel set corresponding to the first starting pixel is greater than a first threshold value, and the pixel set corresponding to the first starting pixel comprises pixels located above the inclined line in a column where the first starting pixel is located;
determining a detection window corresponding to the first starting pixel in the V disparity map according to the disparity of the first starting pixel and a window parameter corresponding to a preset obstacle type;
and determining the type of the obstacle identified in the image to be identified according to the identification degree of the image in the detection window.
In one possible embodiment, determining the first starting pixel on the oblique line includes:
determining pixels on the oblique line as first pixels in sequence along the extension direction of the oblique line from one end of the oblique line, and obtaining the sum of the numerical values of the pixels in the pixel set corresponding to the first pixels until the sum of the numerical values of the pixels in the pixel set corresponding to the first pixels determined and obtained on the oblique line is greater than the first threshold value, and determining the first pixels as the first starting pixels;
the pixel set corresponding to the first pixel comprises pixels which are positioned above the oblique line in the column where the first pixel is positioned.
In another possible implementation, the preset obstacle type includes a first obstacle type, and the first obstacle type corresponds to a first window parameter; determining a first detection window corresponding to the first starting pixel in the V disparity map according to the disparity of the first starting pixel and the first window parameter, wherein the determining comprises:
acquiring a first window parameter corresponding to the first obstacle type;
determining the size of the first detection window according to the identification direction of the obstacle, the parallax of the first starting pixel, the first window parameter and the camera parameter for shooting the image to be identified;
and determining the first detection window in the V disparity map according to the size of the first detection window, wherein the first starting pixel is a pixel at one corner of the first detection window.
In another possible implementation, determining the first detection window in the V-disparity map according to the size of the first detection window includes:
determining a position of the first start pixel in the first detection window according to the obstacle identification direction;
and determining the first detection window in the V disparity map according to the position of the first starting pixel in the first detection window and the size of the first detection window.
In another possible implementation manner, determining the type of the obstacle identified in the image to be identified according to the degree of identification of the image in the detection window includes:
detecting the image in the detection window according to the type of a preset obstacle corresponding to the detection window so as to obtain the recognition degree of the image in the detection window;
determining a target detection window according to the recognition degree of the image in the detection window, wherein the recognition degree of the image in the target detection window is the highest;
and determining the type of the obstacle corresponding to the target detection window as the type of the obstacle identified in the image to be identified.
In another possible embodiment, the detection window comprises a second detection window, the second detection window corresponding to a second obstacle type; detecting the image in the second detection window according to the second obstacle type to acquire the recognition degree of the image in the second detection window, including:
acquiring a standard image corresponding to the second obstacle type;
determining the similarity between the image in the second detection window and the standard image;
and determining the similarity as the recognition degree of the image in the second detection window.
In another possible embodiment, the oblique line includes a straight line where a lower edge of an obstacle in the V-disparity map is located; alternatively,
the oblique line includes a straight line where a lower edge of the obstacle in the V-disparity map is located, and a portion of an upper edge line and/or a lower edge line of the V-disparity map.
In a second aspect, an embodiment of the present invention provides an obstacle identification apparatus, including an extraction module, a first determination module, a second determination module, and a third determination module, wherein,
the extraction module is used for extracting oblique lines from a V parallax image of an image to be identified;
the first determining module is configured to determine a first starting pixel on the oblique line, where a sum of values of pixels in a pixel set corresponding to the first starting pixel is greater than a first threshold, and the pixel set corresponding to the first starting pixel includes pixels located above the oblique line in the column where the first starting pixel is located;
the second determining module is used for determining a detection window corresponding to the first starting pixel in the V disparity map according to the disparity of the first starting pixel and a window parameter corresponding to a preset obstacle type;
and the third determining module is used for determining the type of the obstacle identified in the image to be identified according to the identification degree of the image in the detection window.
In a possible implementation manner, the first determining module is specifically configured to:
determining pixels on the oblique line as first pixels in sequence along the extension direction of the oblique line from one end of the oblique line, and obtaining the sum of the numerical values of the pixels in the pixel set corresponding to the first pixels until the sum of the numerical values of the pixels in the pixel set corresponding to the first pixels determined and obtained on the oblique line is greater than the first threshold value, and determining the first pixels as the first starting pixels;
the pixel set corresponding to the first pixel comprises pixels which are positioned above the oblique line in the column where the first pixel is positioned.
In another possible implementation, the preset obstacle type includes a first obstacle type, and the first obstacle type corresponds to a first window parameter; the second determining module is specifically configured to:
acquiring a first window parameter corresponding to the first obstacle type;
determining the size of the first detection window according to the identification direction of the obstacle, the parallax of the first starting pixel, the first window parameter and the camera parameter for shooting the image to be identified;
and determining the first detection window in the V disparity map according to the size of the first detection window, wherein the first starting pixel is a pixel at one corner of the first detection window.
In another possible implementation manner, the second determining module is specifically configured to:
determining a position of the first start pixel in the first detection window according to the obstacle identification direction;
and determining the first detection window in the V disparity map according to the position of the first starting pixel in the first detection window and the size of the first detection window.
In another possible implementation manner, the third determining module is specifically configured to:
detecting the image in the detection window according to the type of a preset obstacle corresponding to the detection window so as to obtain the recognition degree of the image in the detection window;
determining a target detection window according to the recognition degree of the image in the detection window, wherein the recognition degree of the image in the target detection window is the highest;
and determining the type of the obstacle corresponding to the target detection window as the type of the obstacle identified in the image to be identified.
In another possible embodiment, the detection window comprises a second detection window, the second detection window corresponding to a second obstacle type; the third determining module is specifically configured to:
acquiring a standard image corresponding to the second obstacle type;
determining the similarity between the image in the second detection window and the standard image;
and determining the similarity as the recognition degree of the image in the second detection window.
In another possible embodiment, the oblique line includes a straight line where a lower edge of an obstacle in the V-disparity map is located; alternatively,
the oblique line includes a straight line where a lower edge of the obstacle in the V-disparity map is located, and a portion of an upper edge line and/or a lower edge line of the V-disparity map.
In a third aspect, an embodiment of the present invention provides an obstacle identification terminal, which includes a processor, a memory, a camera assembly, and a communication bus, where the communication bus is used to implement connection between components, the memory is used to store program instructions, and the processor is used to read the program instructions in the memory and execute the method according to the program instructions in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium having stored thereon computer-executable instructions for causing a computer to perform any of the methods described above.
According to the obstacle identification method, the obstacle identification device and the obstacle identification terminal, when the obstacle identification needs to be carried out on the image to be identified, the oblique line is extracted from the V parallax image of the image to be identified, the first starting pixel is determined on the oblique line, the detection window corresponding to the first starting pixel is determined in the V parallax image according to the parallax of the first starting pixel and the window parameter corresponding to the preset obstacle type, and the obstacle type identified in the image to be identified is determined according to the identification degree of the image in the detection window.
In the above process, only one of the preset obstacle types is the obstacle type corresponding to the obstacle, so that only one of the obtained detection windows is determined as the window where the obstacle is located (hereinafter referred to as a correct detection window), and none of the other detection windows is determined as the window where the obstacle is located (hereinafter referred to as an incorrect detection window). Since the type of the preset obstacle corresponding to the wrong detection window is different from the type of the obstacle in the detection window, the degree of image recognition in the wrong detection window is usually lower than the preset threshold. And the type of the preset obstacle corresponding to the correct detection window is the same as the type of the obstacle in the detection window, and even if the obstacle envelope line in the V disparity map is partially missing, the image recognition degree in the correct detection window is still larger than the preset threshold value.
Therefore, even if the envelope line of the obstacle in the V-disparity map of the image to be recognized is partially missing, the recognition degree of the image in the correct detection window is much higher than that in the wrong detection windows, so the type of the obstacle can still be accurately recognized in the image to be recognized, which improves the accuracy of obstacle recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of an obstacle identification method according to an embodiment of the present invention;
fig. 2 is a first schematic flow chart of an obstacle identification method according to an embodiment of the present invention;
FIG. 3A is a first schematic diagram of an oblique line provided by an embodiment of the present invention;
FIG. 3B is a second schematic diagram of an oblique line provided by an embodiment of the present invention;
FIG. 3C is a third schematic diagram of an oblique line provided by an embodiment of the present invention;
FIG. 3D is a fourth schematic diagram of an oblique line provided by an embodiment of the present invention;
FIG. 4 is a diagram illustrating a process of determining a first start pixel according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of a method for determining a first detection window according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an obstacle identification process according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an obstacle identification device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an obstacle identification terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic view of an application scenario of the obstacle identification method according to the embodiment of the present invention. Referring to fig. 1, a V disparity map 101 of an image to be recognized and a recognition model 102 are included. The image to be recognized may include a plurality of obstacles, and accordingly, the V disparity map 101 includes an envelope map (a line map of the obstacle) corresponding to each obstacle. Lines in the envelope map of an obstacle in the V disparity map 101 may be partially missing; for example, the V disparity map 101 of the image to be recognized shown in fig. 1 includes an obstacle 101-1 and an obstacle 101-2, and part of the lines of the obstacle 101-1 are missing in its envelope map. The recognition model 102 is obtained by learning from a large number of samples; optionally, the recognition model 102 may include a plurality of obstacle types and a window parameter corresponding to each obstacle type, where the window parameter may be used to indicate information such as the length and width of the window.
In this application, when identifying an obstacle in an image to be identified, a starting pixel of the obstacle may first be determined in the V disparity map of the image to be identified, a detection window may be determined according to the window parameter corresponding to each obstacle type in the identification model, and the image in each determined detection window may be identified according to the obstacle type corresponding to that detection window, so as to determine the recognition degree of the image in the detection window, where the recognition degree of the image in a detection window is the similarity between the image in the detection window and a preset image corresponding to the detection window. The obstacle type corresponding to the detection window with the highest recognition degree is determined as the obstacle type of the obstacle.
For example, referring to fig. 1, when identifying an obstacle 101-1, a detection window a, a detection window B, and a detection window C corresponding to the obstacle 101-1 may be determined according to the identification model. Assuming that the types of the obstacles corresponding to the detection window a, the detection window B and the detection window C are pedestrians, small vehicles and buses respectively, determining the recognition degree of the image in the detection window a (the similarity between the image in the detection window a and a preset pedestrian) as a first recognition degree, determining the recognition degree of the image in the detection window B (the similarity between the image in the detection window B and a preset small vehicle) as a second recognition degree, and determining the recognition degree of the image in the detection window C (the similarity between the image in the detection window C and a preset bus) as a third recognition degree. It is assumed that the second recognition degree is the highest, and therefore, the obstacle 101-1 can be determined to be a small vehicle.
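The selection step in this example amounts to taking the window with the maximum recognition degree. The following is a trivial sketch, with hypothetical similarity scores that are not values from the patent:

```python
def classify_obstacle(recognition_degrees):
    """Return the obstacle type whose detection window has the highest
    recognition degree (similarity to that type's standard image)."""
    return max(recognition_degrees, key=recognition_degrees.get)

# Hypothetical degrees for windows A, B, C of fig. 1: the small-vehicle
# window scores highest, so obstacle 101-1 is classified as a small vehicle.
degrees = {"pedestrian": 0.31, "small vehicle": 0.87, "bus": 0.42}
label = classify_obstacle(degrees)
```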
The technical means shown in the present application will be described in detail below with reference to specific examples. It should be noted that the following embodiments may be combined with each other, and the description of the same or similar contents in different embodiments is not repeated.
Fig. 2 is a first flowchart of an obstacle identification method according to an embodiment of the present invention. Referring to fig. 2, the method may include:
s201, oblique lines are extracted from the V disparity map of the image to be recognized.
The execution subject of the embodiment of the present invention may be an obstacle recognition device. Alternatively, the obstacle recognition device may be implemented by software, or the obstacle recognition device may also be implemented by a combination of software and hardware.
Alternatively, the image to be recognized may be an image captured by an image capturing apparatus.
Optionally, a disparity map of the image to be recognized may be obtained first, a V disparity map corresponding to the disparity map is obtained, and an oblique line is extracted from the V disparity map. It should be noted that, reference may be made to the prior art for a process of acquiring a disparity map of an image to be recognized and a process of acquiring a V disparity map of a disparity map, and this is not specifically limited in the embodiment of the present invention.
Optionally, the value of each pixel in the disparity map is the disparity of the pixel.
Optionally, the number of rows of the V disparity map is the same as the number of rows of the disparity map, and the number of columns of the V disparity map is the same as the number of disparity values in the disparity range of the disparity map. For example, assuming that the disparity range in the disparity map is 0-5 (6 disparities in total), the number of columns in the V disparity map is 6.
Optionally, the value of the pixel (i, j) in the V disparity map is: the number of pixels having a disparity j in the ith row in the disparity map. For example, if the number of pixels having a disparity of 3 in the 2 nd row in the disparity map is 3, the value of the pixel in the 2 nd row and the 3 rd column in the V disparity map is 3.
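The construction rule above can be sketched in a few lines (a minimal Python sketch; the function name and zero-indexed array layout are illustrative assumptions, not part of the patent):

```python
import numpy as np

def build_v_disparity(disparity_map, max_disparity):
    """Build a V-disparity map: it has the same number of rows as the
    disparity map and one column per disparity value in [0, max_disparity].
    The value at (i, j) is the number of pixels with disparity j in row i
    of the disparity map."""
    rows = disparity_map.shape[0]
    v_disp = np.zeros((rows, max_disparity + 1), dtype=np.int32)
    for i in range(rows):
        for d in disparity_map[i]:
            v_disp[i, int(d)] += 1
    return v_disp

# Mirroring the text (with zero-indexed rows here): row 2 contains three
# pixels with disparity 3, so the V-disparity map holds 3 at (2, 3),
# and a disparity range of 0-5 yields 6 columns.
disp = np.array([[0, 1, 2, 0],
                 [1, 1, 0, 2],
                 [3, 3, 3, 0]])
v = build_v_disparity(disp, max_disparity=5)
```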
Existing theory shows that a flat stretch of ground appears as a slanted straight line in the V-disparity map. The oblique line in the embodiment of the present invention includes the straight line on which the ground is located; for example, this slanted straight line may be taken as the oblique line, or as a part of the oblique line.
Optionally, when the V disparity map is captured in different ways, the shape of the obstacle, the angle of the obstacle, and the position of the obstacle in the V disparity map differ, and the oblique line shown in the embodiment of the present invention is related to the shape of the obstacle in the V-disparity map, the angle at which the obstacle is located (lateral, forward, and so on), the position of the obstacle in the V disparity map, and the like. For example, the oblique line may include the straight line on which the lower edge of the obstacle in the V-disparity map is located; or that straight line and a partial upper edge line of the V-disparity map; or that straight line and a partial lower edge line of the V-disparity map; or that straight line, a partial upper edge line of the V-disparity map, and a partial lower edge line of the V-disparity map.
Optionally, a straight line may be fitted in the V disparity map to obtain an initial oblique line, where the initial oblique line may be the oblique line in the embodiment of the present invention, or a part of it. It is then judged whether the two ends of the initial oblique line extend to the left and right edges of the V parallax image: if so, the initial oblique line may be determined to be the oblique line in the embodiment of the present invention; if not, the initial oblique line may be extended along the upper edge and/or the lower edge of the V parallax image so that it extends to both the left and right edges. Specifically, the following four cases may occur. Next, the oblique lines will be described in detail with reference to fig. 3A to 3D.
Fig. 3A is a first schematic diagram of an oblique line provided in the embodiment of the present invention. Fig. 3B is a second schematic diagram of an oblique line provided in the embodiment of the present invention. Fig. 3C is a third schematic diagram of an oblique line provided in the embodiment of the present invention. Fig. 3D is a fourth schematic diagram of an oblique line provided in the embodiment of the present invention. Fig. 3A to 3D each include a V disparity map and the oblique line extracted from that V disparity map.
Referring to fig. 3A, the initial slant line fitted to the V-disparity map may extend to the left and right edges of the V-disparity map, and therefore, the initial slant line may be directly determined as the slant line S1. In this case, the oblique line S1 includes a straight line on which the lower edge of the obstacle in the V parallax map is located.
Referring to fig. 3B, the right end of the initial oblique line obtained by fitting in the V-disparity map extends to the right edge of the V-disparity map, but the left end of the initial oblique line only extends to point M and cannot reach the left edge of the V-disparity map. Therefore, the initial oblique line may be extended leftward along the upper edge of the V-disparity map, resulting in the oblique line S2. In this case, the oblique line S2 includes the straight line on which the lower edge of the obstacle in the V parallax map is located and a partial upper edge line of the V parallax map.
Referring to fig. 3C, the left end of the initial oblique line obtained by fitting in the V-disparity map extends to the left edge of the V-disparity map, but the right end of the initial oblique line only extends to point N and cannot reach the right edge of the V-disparity map. Therefore, the initial oblique line may be extended to the right along the lower edge of the V-disparity map, resulting in the oblique line S3. In this case, the oblique line S3 includes the straight line on which the lower edge of the obstacle in the V parallax map is located and a partial lower edge line of the V parallax map.
Referring to fig. 3D, the left end of the initial oblique line obtained by fitting in the V-disparity map can only extend to the point P of the V-disparity map and cannot extend to the left edge of the V-disparity map, and the right end of the initial oblique line can only extend to the point Q of the V-disparity map and cannot extend to the right edge of the V-disparity map. Therefore, the initial oblique line may be extended to the left along the upper edge of the V-disparity map and to the right along the lower edge of the V-disparity map, resulting in the oblique line S4. In this case, the oblique line S4 includes a straight line on which the lower edge of the obstacle in the V parallax map is located, a partial upper edge line of the V parallax map, and a partial lower edge line of the V parallax map.
Optionally, oblique lines may be extracted from the V-disparity map by using an algorithm such as hough transform, RANSAC, or least square method, which is not described herein again in the embodiments of the present invention.
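The line-fitting step can be illustrated with a simple least-squares fit over the populated cells of the V-disparity map. This is a hedged sketch only: the function name is hypothetical, and a Hough-transform or RANSAC fit, as mentioned above, would be more robust to the obstacle pixels that also appear in the map.

```python
import numpy as np

def fit_ground_line(v_disp, min_count=1):
    """Fit the oblique (ground) line in a V-disparity map by least squares.
    Every cell (row r, disparity d) whose count is at least min_count votes
    as a point (d, r); flat ground appears as a slanted straight line.
    Returns (slope, intercept) of the model r = slope * d + intercept."""
    rows, disps = np.nonzero(v_disp >= min_count)
    slope, intercept = np.polyfit(disps, rows, deg=1)
    return slope, intercept

# Synthetic V-disparity map whose populated cells lie exactly on r = 2*d + 1.
v = np.zeros((12, 5), dtype=int)
for d in range(5):
    v[2 * d + 1, d] = 10
slope, intercept = fit_ground_line(v)
```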
S202, a first starting pixel is determined on the oblique line.
The sum of the values of the pixels in the pixel set corresponding to the first starting pixel is greater than a first threshold, and the pixel set corresponding to the first starting pixel comprises the pixels which are positioned above the oblique line in the column where the first starting pixel is positioned.
Alternatively, in the process of performing obstacle recognition in the V disparity map, obstacle recognition may be performed from left to right, or from right to left.
Optionally, starting from one end of the oblique line and proceeding along its extending direction according to the direction of obstacle recognition (from left to right or from right to left), pixels on the oblique line may be sequentially determined as the first pixel, and the sum of the values of the pixels in the pixel set corresponding to the first pixel is obtained, until a first pixel is found for which this sum is greater than the first threshold; that first pixel is determined as the first start pixel.
Optionally, the sum of the values of the pixels in the pixel set corresponding to the first pixel indicates the number of obstacle pixels included in the column of the first pixel in the V disparity map: the larger the sum, the more obstacle pixels the column of the first pixel contains.
Optionally, one end of the oblique line may be an end point of the oblique line itself, or an end point, on the oblique line, of a previously determined detection window. For example, when obstacle recognition in the V disparity map starts, one end of the oblique line is an end point of the oblique line. When the V disparity map includes a plurality of obstacles and recognition continues after one obstacle has been recognized, one end of the oblique line is the end point, on the oblique line, of the detection window determined most recently.
Next, a process of determining the first start pixel will be described in detail with reference to fig. 4.
Fig. 4 is a schematic diagram of a process for determining a first start pixel according to an embodiment of the present invention. Referring to fig. 4, the V disparity map includes an obstacle 401, an obstacle 402, and a slant line S.
The direction of obstacle recognition is assumed to be from right to left. Initially, starting from the rightmost end of the oblique line S, the pixel A is determined as the first pixel, the sum of the values of the pixels located above the oblique line S in the column of the pixel A is obtained, and whether this sum is greater than the first threshold is judged. As can be seen from fig. 4, the column of the pixel A does not include any pixel of the obstacle 401; therefore, the sum of the values of the pixels located above the oblique line S in the column of the pixel A is smaller than the first threshold.
Further, the pixel B is determined as the first pixel, and it is judged that the pixel B does not satisfy the condition as the first start pixel. Further, the pixel C is determined as the first pixel, and it is judged that the pixel C does not satisfy the condition as the first start pixel. Further, the pixel D is determined as a first pixel, and if it is determined that the pixel D satisfies the condition as a first start pixel, the pixel D is determined as a first start pixel.
Assuming that the obstacle 401 is identified in the V disparity map, it is necessary to continue to determine a new first starting pixel. Specifically, the method comprises the following steps:
according to the extending direction of the oblique line, the pixel E is determined as a first pixel, and the pixel E is judged not to meet the condition of being a first starting pixel. Further, the pixel F is determined as the first pixel, and it is judged that the pixel F does not satisfy the condition as the first start pixel. Further, the pixel G is determined as a first pixel, and if it is determined that the pixel G satisfies the condition as a first start pixel, the pixel G is determined as a first start pixel.
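The column-by-column scan walked through above can be sketched as follows. The function name `find_first_start_pixel` and its parameters are hypothetical, assuming the V disparity map is stored with rows as image rows and columns as disparity values, and that `line_row(d)` gives the row of the fitted oblique line at disparity column d.

```python
import numpy as np

def find_first_start_pixel(v_disparity, line_row, start_col, threshold, step=-1):
    """Scan along the oblique line for the first start pixel.

    A column qualifies when the sum of the bin values strictly above the
    line (i.e. at smaller row indices) exceeds `threshold`.
    step=-1 scans right to left, step=+1 left to right.
    Returns the qualifying disparity column, or None if no column qualifies.
    """
    d = start_col
    while 0 <= d < v_disparity.shape[1]:
        r = line_row(d)
        if 0 <= r < v_disparity.shape[0]:
            # "Above the line" means smaller row indices in image coordinates.
            if v_disparity[:r, d].sum() > threshold:
                return d
        d += step
    return None
```

In the fig. 4 walkthrough, the columns of pixels A–C fail the threshold test and the column of pixel D is the first to pass, exactly as the loop above would report.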
S203, determining a detection window corresponding to the first starting pixel in the V disparity map according to the disparity of the first starting pixel and the window parameter corresponding to the preset obstacle type.
Optionally, the number of the preset obstacle types may be 1 or more; in the actual application process, the number of the preset obstacle types may be set according to actual needs, which is not specifically limited in the embodiment of the present invention.
In the embodiment of the present invention, each preset obstacle type corresponds to one window parameter, and the preset obstacle types and their corresponding window parameters are obtained by learning a large number of samples in advance.
For example, the correspondence between the preset obstacle types and the window parameters may be as shown in table 1:
TABLE 1

Preset obstacle type      Window parameters
Bus                       Length: z1; Height: y1
Truck                     Length: z2; Height: y2
Small vehicle             Length: z3; Height: y3
Non-motor vehicle         Length: z4; Height: y4
……                        ……
Optionally, when the obstacle identification direction is from right to left, the first starting pixel is used as the lower right corner of the detection window, and the detection window corresponding to each preset obstacle type is determined according to the window parameter corresponding to each preset obstacle type.
Optionally, when the obstacle identification direction is from left to right, the first starting pixel is used as the lower left corner of the detection window, and the detection window corresponding to each preset obstacle type is determined according to the window parameter corresponding to each preset obstacle type.
The number of the detection windows determined is generally the same as the number of the preset obstacle types. For example, assuming that 5 preset obstacle types are preset, 5 detection windows corresponding to the first starting pixel may be determined.
It should be noted that, in the embodiment shown in fig. 5, a method for determining the detection window is described in detail, and will not be described here.
And S204, determining the type of the obstacle identified in the image to be identified according to the identification degree of the image in the detection window.
Optionally, after the detection window is determined, the detection window may be preprocessed to extract the image in the detection window.
For example, the preprocessing may include: removing the oblique line from the detection window, randomly adding or deleting points on the oblique line, and performing an appropriate enlargement or reduction. Of course, in an actual application process, the preprocessing may also include other operations, which is not specifically limited in the embodiment of the present invention.
Optionally, the image in each detection window may be detected according to the preset obstacle type corresponding to that detection window to obtain the recognition degree of the image in the detection window; a target detection window is then determined according to the recognition degrees, the target detection window being the one whose image has the highest recognition degree; and the obstacle type corresponding to the target detection window is determined as the obstacle type recognized in the image to be recognized.
Assuming that the detection windows include a second detection window corresponding to a second obstacle type, the recognition degree of the image in the second detection window may be obtained in the following feasible implementation manner: acquiring a standard image corresponding to the second obstacle type, determining the similarity between the image in the second detection window and the standard image, and determining the similarity as the recognition degree of the image in the second detection window.
Optionally, a plurality of features of the image in the second detection window may be extracted, a plurality of features of the standard image may be extracted, and the plurality of features of the image in the second detection window may be matched with the plurality of features of the standard image to determine the similarity between the image in the second detection window and the standard image. Wherein the greater the number of matches of the features of the image in the second detection window with the features of the standard image, the higher the similarity.
Optionally, the feature extraction method may include HOG, LBP, DPM, and the like.
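As a rough illustration of feature-based similarity, the sketch below computes one global gradient-orientation histogram per image and compares two images by cosine similarity. This is a drastically simplified stand-in for HOG (real HOG additionally divides the image into cells and normalizes over blocks); all function names are illustrative assumptions.

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """A minimal HOG-like descriptor: a single histogram of unsigned
    gradient orientations, weighted by gradient magnitude, L2-normalized."""
    gy, gx = np.gradient(img.astype(float))       # row- and column-gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def recognition_degree(window_img, standard_img):
    """Cosine similarity between the two descriptors, used here as the
    recognition degree of the window image against the standard image."""
    return float(np.dot(orientation_histogram(window_img),
                        orientation_histogram(standard_img)))
```

An identical pair of images yields a recognition degree of 1.0, and less similar gradient structure yields smaller values.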
For example, assume that the preset obstacle type is a bus and that the detection window determined according to the window parameter corresponding to the type "bus" is the detection window 1. When detecting the image in the detection window 1, the image may be compared with a preset bus image to obtain their similarity, and this similarity is determined as the recognition degree of the image in the detection window 1.
Optionally, when performing image recognition on the image in the detection window, the features of the image in the detection window may be extracted first, and obstacle recognition may be performed according to the features of the image in the detection window, for example, the feature extraction method may include HOG, LBP, DPM, and the like. Obstacle identification can also be performed by means of a convolutional neural network.
It should be noted that, in an actual application process, when the image to be recognized includes a plurality of obstacles, by repeatedly executing the embodiment shown in fig. 2, each obstacle can be recognized in the image to be recognized, and accordingly, the recognition result may include the type of the obstacle included in the image to be recognized and the position of the type of the obstacle in the image to be recognized.
According to the obstacle recognition method provided by the embodiment of the present invention, when obstacle recognition needs to be performed on an image to be recognized, an oblique line is extracted from the V disparity map of the image to be recognized, a first start pixel is determined on the oblique line, a detection window corresponding to the first start pixel is determined in the V disparity map according to the disparity of the first start pixel and the window parameter corresponding to each preset obstacle type, and the obstacle type recognized in the image to be recognized is determined according to the recognition degree of the image in the detection window.
In the above process, only one of the preset obstacle types is the obstacle type corresponding to the obstacle, so that only one of the obtained detection windows is determined as the window where the obstacle is located (hereinafter referred to as a correct detection window), and none of the other detection windows is determined as the window where the obstacle is located (hereinafter referred to as an incorrect detection window). Since the type of the preset obstacle corresponding to the wrong detection window is different from the type of the obstacle in the detection window, the degree of image recognition in the wrong detection window is usually lower than the preset threshold. And the type of the preset obstacle corresponding to the correct detection window is the same as the type of the obstacle in the detection window, and even if the obstacle envelope line in the V disparity map is partially missing, the image recognition degree in the correct detection window is still larger than the preset threshold value.
Therefore, even if the envelope line of the obstacle in the V disparity map of the image to be recognized is partially missing, the recognition degree of the image in the correct detection window is still much higher than that in the wrong detection windows, so the obstacle type can be accurately recognized in the image to be recognized, and the accuracy of obstacle recognition is thereby improved.
On the basis of any of the above embodiments, optionally, a detection window corresponding to the first starting pixel may be determined in the V-disparity map according to the disparity of the first starting pixel and the window parameter corresponding to the preset obstacle type through a feasible implementation manner (S203 in the embodiment shown in fig. 2), specifically, please refer to the embodiment shown in fig. 5.
It should be noted that the process of determining each detection window in the detection windows is the same, and the following description takes a preset obstacle type as a first obstacle type, a window parameter as a first window parameter, and a determined detection window as a first detection window as an example.
Fig. 5 is a flowchart illustrating a method for determining a first detection window according to an embodiment of the present invention. Referring to fig. 5, the method may include:
S501, acquiring a first window parameter corresponding to the first obstacle type.
Optionally, the first window parameter may include a length, a width, a height, and the like of the obstacle corresponding to the first obstacle type. Of course, in an actual application process, the content included in the first window parameter may be set according to actual needs, and this is not specifically limited in the embodiment of the present invention.
S502, determining the size of a first detection window according to the identification direction of the obstacle, the parallax of the first starting pixel, the first window parameter and the camera parameter for shooting the image to be identified.
Alternatively, the obstacle recognition direction includes left to right and right to left.
Optionally, the camera shown in the embodiment of the present invention is generally a binocular camera, and accordingly, the camera parameters generally include a base line length, a focal length, and the like of the binocular camera.
Optionally, the size of the first detection window generally includes the length and width of the first detection window.
Alternatively, when the obstacle recognition direction is from right to left, the width of the first detection window may be determined by the following formula one, where the width of the first detection window refers to the lateral length.
x = d² · Δz / (B · f)    (formula one)
Wherein x is the width of the first detection window, d is the parallax of the first start pixel, Δ z is the preset width in the first window parameter, B is the base line length of the binocular camera, and f is the focal length of the binocular camera.
Alternatively, when the obstacle recognition direction is from left to right, the width of the first detection window may be determined by the following formula two, where the width of the first detection window refers to the lateral length.

x = d² · Δz / (B · f)    (formula two)

Wherein x is the width of the first detection window, d is the parallax of the first start pixel, Δz is the preset width in the first window parameter, B is the base line length of the binocular camera, and f is the focal length of the binocular camera.
Alternatively, the height of the first detection window may be determined by the following formula three, wherein the height of the first detection window refers to the longitudinal length.
y = d · Δy / B    (formula three)
Wherein y is the height of the first detection window, d is the parallax of the first start pixel, Δ y is the preset height in the first window parameter, and B is the baseline length of the binocular camera.
It should be noted that, the above is only a manner of schematically determining the size of the first detection window in an exemplary form, and of course, in an actual application process, the size of the first detection window may also be determined according to actual needs, which is not specifically limited in this embodiment of the present invention.
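A sketch of one plausible window-size computation, assuming the standard stereo relation d = B·f/Z, under which a metric depth extent Δz at disparity d spans a disparity range of about d²·Δz/(B·f), and a metric height Δy spans d·Δy/B = f·Δy/Z image rows. The function name and parameter conventions are illustrative assumptions, not the patent's exact implementation.

```python
def detection_window_size(d, delta_z, delta_y, B, f):
    """Size of a detection window in the V-disparity map for an obstacle
    with preset metric length delta_z and height delta_y seen at disparity d.

    With d = B * f / Z:
      width along the disparity axis: x ≈ d**2 * delta_z / (B * f)
      height in image rows:           y = d * delta_y / B  (= f * delta_y / Z)
    B is the stereo baseline, f the focal length (both in consistent units).
    """
    x = d * d * delta_z / (B * f)   # disparity-axis extent of the window
    y = d * delta_y / B             # row-axis extent of the window
    return x, y
```

For instance, with B = 0.5 m and f = 1000 px, an obstacle at Z = 20 m has disparity d = 25 px; a 10 m long, 3 m tall obstacle then yields a window of about 12.5 disparity units by 150 rows.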
And S503, determining a first detection window in the V disparity map according to the size of the first detection window.
Optionally, the first detection window may also be determined according to a preset window shape, for example, the preset window shape may be a polygon such as a rectangle, a trapezoid, and the like. Of course, in the actual application process, the preset window shape may be set according to actual needs, and this is not specifically limited in the embodiment of the present invention.
Alternatively, the position of the first start pixel in the first detection window may be determined according to the obstacle identification direction, and the first detection window may be determined in the V disparity map according to the position of the first start pixel in the first detection window and the size of the first detection window.
Optionally, when the obstacle recognition direction is from right to left, the first start pixel is used as a lower right corner of the first detection window, and the first detection window is determined in the V disparity map according to the size of the first detection window.
Optionally, when the obstacle recognition direction is from left to right, the first start pixel is used as a lower left corner of the first detection window, and the first detection window is determined in the V-disparity map according to the size of the first detection window.
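The corner-anchoring rule in the two paragraphs above can be sketched as follows; `window_rect` and the coordinate convention (columns increase to the right, rows increase downward, as in image coordinates) are illustrative assumptions.

```python
def window_rect(start_px, size, direction):
    """Place the first detection window so that the first start pixel is its
    lower-right corner (right-to-left scan) or lower-left corner
    (left-to-right scan).

    start_px: (col, row) of the first start pixel.
    size:     (width, height) of the window.
    Returns (col_min, row_min, col_max, row_max) in V-disparity coordinates.
    """
    col, row = start_px
    w, h = size
    if direction == "right_to_left":
        return (col - w, row - h, col, row)   # start pixel at lower right
    return (col, row - h, col + w, row)        # start pixel at lower left
```

For a start pixel at (10, 20) and a 4-by-6 window, a right-to-left scan yields the rectangle (6, 14, 10, 20), while a left-to-right scan yields (10, 14, 14, 20).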
In the embodiment shown in fig. 5, the determined size of the first detection window matches the size of the obstacle corresponding to the first obstacle type.
Next, the above method embodiment is further described in detail by specific examples with reference to fig. 6.
Fig. 6 is a schematic diagram of an obstacle identification process according to an embodiment of the present invention. Referring to fig. 6, scenes 601 to 603 are included.
Referring to the scene 601, after the V disparity map of the image to be recognized is obtained, an oblique line S is extracted from it. The oblique line S includes the straight line where the lower edge of the obstacle in the V disparity map is located and a partial lower edge line of the V disparity map.
Referring to the scene 602, assuming that the obstacle recognition direction is from right to left, starting from the rightmost end of the oblique line, the pixel A is first determined as the first pixel, and it is judged that the pixel A does not satisfy the condition for being the first start pixel. The pixel B is then determined as the first pixel, and it is judged that the pixel B does not satisfy the condition. The pixel C is then determined as the first pixel, and it is judged that the pixel C does not satisfy the condition. Finally, the pixel D is determined as the first pixel; since the pixel D satisfies the condition for being the first start pixel, the pixel D is determined as the first start pixel.
Referring to the scene 603, it is assumed that 4 types of obstacles are preset, which are pedestrians, non-motor vehicles, small vehicles and buses respectively. And determining the detection window K1 according to the window parameter corresponding to the pedestrian by taking the position of the pixel D as the lower right corner. And determining a detection window K2 according to the window parameters corresponding to the non-motor vehicle by taking the position of the pixel D as the lower right corner. And determining the detection window K3 according to the window parameters corresponding to the small vehicle by taking the position of the pixel D as the lower right corner. And determining a detection window K4 according to window parameters corresponding to the bus by taking the position of the pixel D as the lower right corner.
The image in the detection window K1 is compared with the standard pedestrian image to obtain the similarity between the detection window K1 and the standard pedestrian image, and this similarity is determined as the recognition degree of the image in the detection window K1, denoted recognition degree 1. The image in the detection window K2 is compared with the standard non-motor vehicle image to obtain their similarity, which is determined as the recognition degree of the image in the detection window K2, denoted recognition degree 2. The image in the detection window K3 is compared with the standard small vehicle image to obtain their similarity, which is determined as the recognition degree of the image in the detection window K3, denoted recognition degree 3. The image in the detection window K4 is compared with the standard bus image to obtain their similarity, which is determined as the recognition degree of the image in the detection window K4, denoted recognition degree 4. Specifically, the images to be compared and the recognition degrees may be as shown in table 2:
TABLE 2
Detection window    Standard image for comparison       Recognition degree
K1                  Standard pedestrian image           Recognition degree 1
K2                  Standard non-motor vehicle image    Recognition degree 2
K3                  Standard small vehicle image        Recognition degree 3
K4                  Standard bus image                  Recognition degree 4
The recognition degrees 1 to 4 are compared; if the recognition degree 3 is determined to be the largest, the detection window K3 is determined as the target detection window, and the obstacle type (small vehicle) corresponding to the detection window K3 is determined as the obstacle type recognized in the image to be recognized.
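The final comparison of recognition degrees reduces to an arg-max over the candidate detection windows. The sketch below uses hypothetical names, with the recognition degrees from this example serving only as placeholder values.

```python
def identify_obstacle(degrees):
    """Pick the target detection window: the candidate with the highest
    recognition degree. `degrees` maps preset obstacle type -> degree."""
    return max(degrees, key=degrees.get)
```

With, say, degrees of 0.31 (pedestrian), 0.42 (non-motor vehicle), 0.88 (small vehicle), and 0.47 (bus), the small-vehicle window is selected as the target detection window.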
Fig. 7 is a schematic structural diagram of an obstacle identification device according to an embodiment of the present invention. Referring to fig. 7, it includes an extracting module 11, a first determining module 12, a second determining module 13 and a third determining module 14, wherein,
the extraction module 11 is configured to extract an oblique line from a V disparity map of an image to be recognized;
the first determining module 12 is configured to determine a first starting pixel on the oblique line, where a sum of values of pixels in a pixel set corresponding to the first starting pixel is greater than a first threshold, and the pixel set corresponding to the first starting pixel includes a pixel located above the oblique line in a column in which the first starting pixel is located;
the second determining module 13 is configured to determine, according to the disparity of the first starting pixel and a window parameter corresponding to a preset obstacle type, a detection window corresponding to the first starting pixel in the V disparity map;
the third determining module 14 is configured to determine the type of the obstacle identified in the image to be identified according to the identification degree of the image in the detection window.
The obstacle identification device provided by the embodiment of the invention can execute the technical scheme shown in the method embodiment, the implementation principle and the beneficial effect are similar, and the details are not repeated here.
In a possible implementation, the first determining module 12 is specifically configured to:
determining pixels on the oblique line as first pixels in sequence along the extension direction of the oblique line from one end of the oblique line, and obtaining the sum of the numerical values of the pixels in the pixel set corresponding to the first pixels until the sum of the numerical values of the pixels in the pixel set corresponding to the first pixels determined and obtained on the oblique line is greater than the first threshold value, and determining the first pixels as the first starting pixels;
the pixel set corresponding to the first pixel comprises pixels which are positioned above the oblique line in the column where the first pixel is positioned.
In another possible implementation, the preset obstacle type includes a first obstacle type, and the first obstacle type corresponds to a first window parameter; the second determining module 13 is specifically configured to:
acquiring a first window parameter corresponding to the first obstacle type;
determining the size of the first detection window according to the identification direction of the obstacle, the parallax of the first starting pixel, the first window parameter and the camera parameter for shooting the image to be identified;
and determining the first detection window in the V disparity map according to the size of the first detection window, wherein the first starting pixel is a pixel at one corner of the first detection window.
In another possible implementation manner, the second determining module 13 is specifically configured to:
determining a position of the first start pixel in the first detection window according to the obstacle identification direction;
and determining the first detection window in the V disparity map according to the position of the first starting pixel in the first detection window and the size of the first detection window.
In another possible implementation manner, the third determining module 14 is specifically configured to:
detecting the image in the detection window according to the type of a preset obstacle corresponding to the detection window so as to obtain the recognition degree of the image in the detection window;
determining a target detection window according to the recognition degree of the image in the detection window, wherein the recognition degree of the image in the target detection window is the highest;
and determining the type of the obstacle corresponding to the target window as the type of the obstacle identified in the image to be identified.
In another possible embodiment, the detection window comprises a second detection window, the second detection window corresponding to a second obstacle type; the third determining module 14 is specifically configured to:
acquiring a standard image corresponding to the second obstacle type;
determining the similarity between the image in the second detection window and the standard image;
and determining the similarity as the recognition degree of the image in the second detection window.
In another possible embodiment, the oblique line includes a straight line where a lower edge of an obstacle in the V-disparity map is located; or,
the oblique line includes a straight line where a lower edge of the obstacle in the V-disparity map is located, and a portion in an upper edge line and/or a lower edge line of the V-disparity map.
The obstacle identification device provided by the embodiment of the invention can execute the technical scheme shown in the method embodiment, the implementation principle and the beneficial effect are similar, and the details are not repeated here.
Fig. 8 is a schematic structural diagram of an obstacle identification terminal according to an embodiment of the present invention. Referring to fig. 8, the terminal includes a processor 21, a memory 22, a camera assembly 23, and a communication bus 24, where the communication bus 24 is used for implementing connection between the components, and in which,
the processor 21 is a control center of the obstacle recognition terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the obstacle recognition terminal and processes data by running or executing software programs and/or modules stored in the memory 22 and calling data stored in the memory 22, thereby monitoring the entire terminal.
The memory 22 may be used to store software programs and modules, and the processor 21 executes various functional applications and data processing by operating the software programs and modules stored in the memory 22. The memory 22 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the obstacle recognition terminal (such as a captured image, a calculated parallax image, or a processed gray image, etc.), and the like.
The camera assembly 23 is used for acquiring an image and transmitting the image to the memory 22 and/or the processor 21; the memory 22 is used for storing program instructions; the processor 21 is configured to read the program instructions in the memory 22 and execute, according to those program instructions, the method of any of the above embodiments.
Optionally, the number of the camera assemblies may be 1, or may be multiple. In an actual application process, the number of the camera assemblies may be set according to actual needs, which is not specifically limited in the embodiment of the present invention.
An embodiment of the present invention provides a computer storage medium, where the computer storage medium stores computer-executable instructions for causing a computer to perform the method according to any of the above embodiments.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above-described method embodiments may be implemented by hardware associated with program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the embodiments of the present invention, and are not limited thereto; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the embodiments of the present invention.

Claims (10)

1. An obstacle recognition method, comprising:
extracting oblique lines from a V disparity map of an image to be recognized;
determining a first starting pixel on the inclined line, wherein the sum of the numerical values of pixels in a pixel set corresponding to the first starting pixel is greater than a first threshold value, and the pixel set corresponding to the first starting pixel comprises pixels located above the inclined line in a column where the first starting pixel is located;
determining a detection window corresponding to the first starting pixel in the V disparity map according to the disparity of the first starting pixel and a window parameter corresponding to a preset obstacle type;
and determining the type of the obstacle identified in the image to be identified according to the identification degree of the image in the detection window.
2. The method of claim 1, wherein determining a first starting pixel on the diagonal comprises:
determining pixels on the oblique line as first pixels in sequence along the extension direction of the oblique line from one end of the oblique line, and obtaining the sum of the numerical values of the pixels in the pixel set corresponding to the first pixels until the sum of the numerical values of the pixels in the pixel set corresponding to the first pixels determined and obtained on the oblique line is greater than the first threshold value, and determining the first pixels as the first starting pixels;
the pixel set corresponding to the first pixel comprises pixels which are positioned above the oblique line in the column where the first pixel is positioned.
3. The method according to claim 1 or 2, wherein the preset obstacle type comprises a first obstacle type, and the first obstacle type corresponds to a first window parameter; and determining, in the V disparity map, a first detection window corresponding to the first starting pixel according to the disparity of the first starting pixel and the first window parameter comprises:
acquiring the first window parameter corresponding to the first obstacle type;
determining a size of the first detection window according to an obstacle recognition direction, the disparity of the first starting pixel, the first window parameter, and a parameter of a camera that captures the image to be recognized;
and determining the first detection window in the V disparity map according to the size of the first detection window, wherein the first starting pixel is a pixel at one corner of the first detection window.
4. The method of claim 3, wherein determining the first detection window in the V disparity map according to the size of the first detection window comprises:
determining a position of the first starting pixel in the first detection window according to the obstacle recognition direction;
and determining the first detection window in the V disparity map according to the position of the first starting pixel in the first detection window and the size of the first detection window.
5. The method according to claim 1 or 2, wherein determining the type of the obstacle recognized in the image to be recognized according to the recognition degree of the image in the detection window comprises:
detecting the image in the detection window according to a preset obstacle type corresponding to the detection window, so as to obtain the recognition degree of the image in the detection window;
determining a target detection window according to the recognition degree of the image in each detection window, wherein the recognition degree of the image in the target detection window is the highest;
and determining the obstacle type corresponding to the target detection window as the type of the obstacle recognized in the image to be recognized.
6. The method of claim 5, wherein the detection window comprises a second detection window, and the second detection window corresponds to a second obstacle type; and detecting the image in the second detection window according to the second obstacle type to obtain the recognition degree of the image in the second detection window comprises:
acquiring a standard image corresponding to the second obstacle type;
determining the similarity between the image in the second detection window and the standard image;
and determining the similarity as the recognition degree of the image in the second detection window.
7. The method according to claim 1 or 2, wherein
the oblique line comprises a straight line on which a lower edge of the obstacle in the V disparity map is located; or,
the oblique line comprises the straight line on which the lower edge of the obstacle in the V disparity map is located and at least one of the following: a portion of an upper edge line of the V disparity map, and a portion of a lower edge line of the V disparity map.
8. An obstacle recognition apparatus, comprising an extraction module, a first determination module, a second determination module, and a third determination module, wherein,
the extraction module is configured to extract an oblique line from a V disparity map of an image to be recognized;
the first determining module is configured to determine a first starting pixel on the oblique line, wherein a sum of values of pixels in a pixel set corresponding to the first starting pixel is greater than a first threshold, and the pixel set corresponding to the first starting pixel comprises pixels located above the oblique line in a column where the first starting pixel is located;
the second determining module is configured to determine, in the V disparity map, a detection window corresponding to the first starting pixel according to the disparity of the first starting pixel and a window parameter corresponding to a preset obstacle type;
and the third determining module is configured to determine the type of the obstacle recognized in the image to be recognized according to the recognition degree of the image in the detection window.
9. The apparatus of claim 8, wherein the first determining module is specifically configured to:
starting from one end of the oblique line, sequentially determining each pixel on the oblique line as a first pixel along an extension direction of the oblique line, and obtaining a sum of values of pixels in a pixel set corresponding to the first pixel, until a first pixel whose corresponding sum is greater than the first threshold is obtained, and determining that first pixel as the first starting pixel;
wherein the pixel set corresponding to the first pixel comprises pixels located above the oblique line in a column where the first pixel is located.
10. An obstacle recognition terminal, comprising a processor, a memory, a camera assembly, and a communication bus configured to connect the components, wherein the memory stores program instructions, and the processor reads the program instructions in the memory and executes the method of any one of claims 1 to 7 according to the program instructions.
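The claimed steps are concrete enough to illustrate in code. The sketch below is not the patented implementation: the function names, the column-sum scan of claims 1–2, the stereo sizing relation used for claim 3 (pixel size = physical size × disparity / baseline, from Z = f·b/d), and the use of normalised cross-correlation as the "recognition degree" of claim 6 are all assumptions made for illustration.

```python
import numpy as np

def find_first_starting_pixel(v_disp, line_pixels, threshold):
    """Claims 1-2 sketch: walk the oblique line from one end; for each
    pixel (row, col) on the line, sum the V-disparity values strictly
    above the line in that column, and return the first pixel whose
    column sum exceeds the first threshold."""
    for row, col in line_pixels:          # ordered along the line's extension
        if v_disp[:row, col].sum() > threshold:
            return row, col
    return None                           # no starting pixel found on this line

def detection_window_size(disparity, obstacle_w_m, obstacle_h_m, baseline_m):
    """Claim 3 sketch: convert a physical window parameter (metres) for a
    preset obstacle type into pixels.  With Z = f * b / d, an object of
    physical size S spans S * f / Z = S * d / b pixels, so the focal
    length cancels and only the baseline is needed."""
    w_px = int(round(obstacle_w_m * disparity / baseline_m))
    h_px = int(round(obstacle_h_m * disparity / baseline_m))
    return w_px, h_px

def recognition_degree(window_img, standard_img):
    """Claim 6 sketch: score the image in the detection window against a
    standard (template) image of the obstacle type via normalised
    cross-correlation; 1.0 means identical up to brightness/contrast."""
    a = window_img.astype(float).ravel() - window_img.mean()
    b = standard_img.astype(float).ravel() - standard_img.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

Under these assumptions, claim 5 then reduces to computing `recognition_degree` for each candidate window/type pair and taking the argmax as the recognized obstacle type.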
CN201711389258.5A 2017-12-21 2017-12-21 Obstacle identification method and device and terminal Active CN107977649B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711389258.5A CN107977649B (en) 2017-12-21 2017-12-21 Obstacle identification method and device and terminal
PCT/CN2018/091638 WO2019119752A1 (en) 2017-12-21 2018-06-15 Obstacle recognition method and terminal

Publications (2)

Publication Number Publication Date
CN107977649A CN107977649A (en) 2018-05-01
CN107977649B true CN107977649B (en) 2020-02-07

Family

ID=62007054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711389258.5A Active CN107977649B (en) 2017-12-21 2017-12-21 Obstacle identification method and device and terminal

Country Status (2)

Country Link
CN (1) CN107977649B (en)
WO (1) WO2019119752A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977649B (en) * 2017-12-21 2020-02-07 海信集团有限公司 Obstacle identification method and device and terminal
CN111284490B (en) * 2018-12-06 2021-06-04 海信集团有限公司 Method for detecting vehicle sliding of front vehicle by vehicle-mounted binocular camera and vehicle-mounted binocular camera
CN109584632A (en) * 2018-12-14 2019-04-05 深圳壹账通智能科技有限公司 Road conditions method for early warning, device, computer equipment and storage medium
CN111932506B (en) * 2020-07-22 2023-07-14 四川大学 Method for extracting discontinuous straight line in image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2602761A4 (en) * 2010-08-03 2017-11-01 Panasonic Intellectual Property Management Co., Ltd. Object detection device, object detection method, and program
CN103123722B (en) * 2011-11-18 2016-04-27 株式会社理光 Road object detection method and system
JP6440411B2 (en) * 2014-08-26 2018-12-19 日立オートモティブシステムズ株式会社 Object detection device
CN105740802A (en) * 2016-01-28 2016-07-06 北京中科慧眼科技有限公司 Disparity map-based obstacle detection method and device as well as automobile driving assistance system
CN107977649B (en) * 2017-12-21 2020-02-07 海信集团有限公司 Obstacle identification method and device and terminal

Also Published As

Publication number Publication date
WO2019119752A1 (en) 2019-06-27
CN107977649A (en) 2018-05-01

Similar Documents

Publication Publication Date Title
CN107977649B (en) Obstacle identification method and device and terminal
US10140529B2 (en) Method, apparatus and device for detecting lane lines
US9053361B2 (en) Identifying regions of text to merge in a natural image or video frame
CN104050654B (en) road edge detection method and device
KR101281260B1 (en) Method and Apparatus for Recognizing Vehicle
EP2955664A1 (en) Traffic lane boundary line extraction apparatus, traffic lane boundary line extraction method, and program
JP2013109760A (en) Target detection method and target detection system
CN108280829A (en) Welding seam method, computer installation and computer readable storage medium
US11017260B2 (en) Text region positioning method and device, and computer readable storage medium
US20150178573A1 (en) Ground plane detection
CN110667474B (en) General obstacle detection method and device and automatic driving system
US11164012B2 (en) Advanced driver assistance system and method
CN112967283A (en) Target identification method, system, equipment and storage medium based on binocular camera
CN113569812A (en) Unknown obstacle identification method and device and electronic equipment
CN111191482A (en) Brake lamp identification method and device and electronic equipment
CN111192214B (en) Image processing method, device, electronic equipment and storage medium
KR101645717B1 (en) Apparatus and method for adaptive calibration of advanced driver assistance system
JP2017058950A (en) Recognition device, image pickup system, and image pickup device, and recognition method and program for recognition
CN111967484A (en) Point cloud clustering method and device, computer equipment and storage medium
CN113450335B (en) Road edge detection method, road edge detection device and road surface construction vehicle
CN112395963B (en) Object recognition method and device, electronic equipment and storage medium
CN112116660B (en) Disparity map correction method, device, terminal and computer readable medium
CN108256510A (en) A kind of road edge line detecting method, device and terminal
KR20180071552A (en) Lane Detection Method and System for Camera-based Road Curvature Estimation
CN112400094B (en) Object detecting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant