CN113297939B - Obstacle detection method, obstacle detection system, terminal device and storage medium - Google Patents

Obstacle detection method, obstacle detection system, terminal device and storage medium

Info

Publication number
CN113297939B
CN113297939B (application CN202110534201.XA)
Authority
CN
China
Prior art keywords
image
lane
obstacle
barrier-free
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110534201.XA
Other languages
Chinese (zh)
Other versions
CN113297939A (en)
Inventor
顾在旺
程骏
庞建新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd
Priority to CN202110534201.XA
Publication of CN113297939A
Application granted
Publication of CN113297939B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides an obstacle detection method, an obstacle detection system, a terminal device and a storage medium. The method comprises: performing lane line detection on an image to be detected to obtain the position information of the lane lines; determining a lane driving image in the image to be detected according to the position information of the lane lines, and performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image; and comparing the lane driving image with the obstacle-free image to obtain obstacle information. Detecting the lane lines in the image to be detected yields the position information of each lane line, from which the lane driving image can be determined. Obstacle-free prediction on the lane driving image produces the corresponding obstacle-free image, and comparing the two images reveals the obstacle information on the lane driving image.

Description

Obstacle detection method, obstacle detection system, terminal device and storage medium
Technical Field
The application belongs to the technical field of image processing, and in particular relates to an obstacle detection method, an obstacle detection system, a terminal device and a storage medium.
Background
With the continued development of the economy and society, problems caused by the growing number of automobiles have become increasingly prominent, such as urban traffic congestion, driving safety, energy supply and environmental pollution. These problems arise from the contradiction between existing traffic infrastructure and vehicles, which shows not only in traffic congestion but also in the pollution caused by obstructed traffic, and in the hazards that relatively backward road conditions, even alongside advanced vehicle technology, pose to people's lives and property. The losses of life and property caused by traffic accidents, most of which involve vehicle collisions, weigh ever more heavily on society, so detecting obstacles in the lane while an automobile is driving has become increasingly important.
In existing obstacle detection, whether obstacles exist in a lane is detected by a deep-learning-based object detection algorithm. However, because the types of obstacles are not fixed, such an algorithm cannot detect obstacles of every type, which in turn reduces the accuracy of obstacle detection.
Disclosure of Invention
The embodiments of the application provide an obstacle detection method, an obstacle detection system, a terminal device and a storage medium, aiming to solve the problem that, in existing obstacle detection, a deep-learning-based object detection algorithm cannot detect obstacles of every type, so that the accuracy of obstacle detection is low.
In a first aspect, embodiments of the present application provide a method for detecting an obstacle, the method comprising:
in response to receiving an image to be detected, performing lane line detection on the image to be detected to obtain the position information of the lane lines;
determining a lane driving image in the image to be detected according to the position information of the lane lines;
performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image;
and comparing the lane driving image with the obstacle-free image to obtain obstacle information.
Compared with the prior art, the embodiments of the application have the following beneficial effects: lane line detection on the image to be detected effectively determines the position information of each lane line; the lane driving image in the image to be detected can then be effectively determined from that position information; obstacle-free prediction on the lane driving image yields the corresponding obstacle-free image; and comparing the lane driving image with the obstacle-free image effectively determines the obstacle information on the lane driving image.
Further, the comparing the lane driving image with the obstacle-free image to obtain obstacle information comprises:
acquiring the pixel values of all pixel points on the lane driving image and on the obstacle-free image respectively, to obtain a first pixel value set and a second pixel value set;
and determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set, and generating the obstacle information according to the obstacle image.
Further, the generating the obstacle information according to the obstacle image comprises:
filtering the obstacle image according to a preset parameter range to obtain a filtered image, and extracting an image contour from the filtered image;
extracting an obstacle extraction image from the obstacle image according to the image contour, and extracting image features from the obstacle extraction image;
determining the type of the obstacle in the obstacle extraction image according to the image features and the image contour, and acquiring the image coordinates of the obstacle image in the image to be detected;
determining the image coordinates of the obstacle extraction image in the image to be detected according to the image coordinates of the obstacle image in the image to be detected, to obtain the obstacle coordinates;
and generating the obstacle information according to the obstacle coordinates and the type of the obstacle.
Further, the determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set comprises:
determining, according to the first pixel value set and the second pixel value set, the pixel difference between the lane driving image and the obstacle-free image at each shared pixel point;
if the pixel difference at any pixel point is larger than a preset threshold, marking that pixel point on the lane driving image;
and determining the image formed by the marked pixel points on the lane driving image as the obstacle image.
Further, the extracting image features from the obstacle extraction image comprises:
converting the obstacle extraction image to grayscale to obtain a grayscale image, and normalizing the grayscale image;
and extracting the gradient of each pixel point in the normalized grayscale image to obtain the image features.
Further, the performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image comprises:
inputting the lane driving image into a pre-trained generative adversarial network for image generation, to obtain the obstacle-free image.
Further, before the inputting the lane driving image into the pre-trained generative adversarial network for image generation, the method further comprises:
inputting a lane sample image into the generator of the generative adversarial network for image generation, to obtain a lane generation image;
inputting the lane generation image and the obstacle-free image corresponding to the lane sample image into the discriminator of the generative adversarial network for image discrimination, to obtain an image discrimination result;
and calculating a model loss value from the image discrimination result, and updating the parameters of the generator and the discriminator respectively according to the model loss value until the generator and the discriminator converge, to obtain the pre-trained generative adversarial network.
In a second aspect, embodiments of the present application provide an obstacle detection system, comprising:
a lane line detection module, configured to perform, in response to receiving an image to be detected, lane line detection on the image to be detected to obtain the position information of the lane lines;
an obstacle-free prediction module, configured to determine a lane driving image in the image to be detected according to the position information of the lane lines, wherein the lane driving image is the area image formed by the lane lines in the image to be detected, and to perform obstacle-free prediction on the lane driving image to obtain an obstacle-free image;
and an image comparison module, configured to compare the lane driving image with the obstacle-free image to obtain obstacle information.
In a third aspect, an embodiment of the present application provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method described above when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described above.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to perform the obstacle detection method according to any one of the first aspects above.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of an obstacle detection method provided in a first embodiment of the present application;
fig. 2 is a flowchart of an obstacle detection method provided in a second embodiment of the present application;
fig. 3 is a schematic structural diagram of an obstacle detecting system according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as meaning "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Embodiment One
Referring to fig. 1, which shows a flowchart of the obstacle detection method provided by the first embodiment of the present application, the method includes the following steps:
and step S10, responding to the received image to be detected, and carrying out lane line detection on the image to be detected to obtain the position information of the lane line.
Specifically, lane line detection is performed on the image to be detected according to a preset lane line detection algorithm, yielding the position information of the lane lines in the image. The preset algorithm can be chosen as required; for example, it may be built from a Gaussian blur algorithm, a Canny edge detection algorithm or a Hough transform algorithm, and it extracts the positions of the lane lines in the image to be detected to obtain the lane line position information.
In this step, if multiple different lane lines exist in the image to be detected, lane line detection yields the position information corresponding to each lane line. Optionally, before the lane line detection, the method further includes: performing image erosion on the image to be detected, and performing lane line detection on the eroded image. Image erosion probes the image with an erosion operator and determines the regions that can accommodate it; it eliminates image boundary points and shrinks the boundary inward, which removes small, meaningless pixel points from the image to be detected and improves the accuracy of the subsequent lane line detection. A minimal sketch of this preprocessing and detection pipeline is given below.
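The following is a minimal Python/OpenCV sketch of the erosion preprocessing, edge detection and Hough-transform steps described above; the kernel size, Canny thresholds and Hough parameters are illustrative assumptions, not values specified by this application.

```python
import cv2
import numpy as np

def detect_lane_lines(image_bgr):
    # Image erosion: shrink boundaries inward to remove small, meaningless pixels.
    kernel = np.ones((3, 3), np.uint8)   # assumed erosion operator
    eroded = cv2.erode(image_bgr, kernel, iterations=1)

    # Gaussian blur followed by Canny edge detection on the eroded image.
    gray = cv2.cvtColor(eroded, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Hough transform: each returned segment approximates a piece of a lane line.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    # Each entry is (x1, y1, x2, y2) in image coordinates: the position information.
    return [] if lines is None else [tuple(seg[0]) for seg in lines]
```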
Step S20: determine a lane driving image in the image to be detected according to the position information of the lane lines, and perform obstacle-free prediction on the lane driving image to obtain an obstacle-free image.
Here, the lane driving image is the area image formed by the lane lines in the image to be detected; that is, the lane driving image corresponds to a lane on the road shown in the image to be detected. In this step, the lane driving image is obtained by extracting, from the image to be detected, the image region indicated by the lane line position information.
Specifically, the image to be detected is inverse-selected according to the position information of the lane lines to obtain a background image: the inverse selection picks out everything in the image except the region indicated by the lane line position information. The background image is then filled with a preset fill color, which determines the lane driving image within the image to be detected. The preset fill color can be set as required; for example, black or red. A sketch of this masking step is given below.
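As an illustration of the inverse selection and fill, here is a short sketch, assuming the lane line position information has been reduced to a polygon around the drivable lane region; the polygon representation and the use of black as the preset fill color are assumptions of this sketch.

```python
import cv2
import numpy as np

def extract_lane_image(image_bgr, lane_polygon, fill_color=(0, 0, 0)):
    # Build a mask that is 255 inside the lane region and 0 elsewhere.
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(lane_polygon, dtype=np.int32)], 255)
    # Inverse selection: everything outside the lane region is the background,
    # and it is filled with the preset fill color.
    lane_image = image_bgr.copy()
    lane_image[mask == 0] = fill_color
    return lane_image
```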
In this step, performing obstacle-free prediction on the lane driving image predicts the image that the corresponding lane would present if no obstacle were on it. For example, if the lane driving image determined from the lane line position information is lane driving image a1, and the lane corresponding to a1 in the image to be detected is lane b1, then obstacle-free prediction on a1 yields the obstacle-free image c1 of lane b1 under the no-obstacle condition.
Further, in this step, the performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image includes: inputting the lane driving image into a pre-trained generative adversarial network for image generation to obtain the obstacle-free image, where the pre-trained generative adversarial network performs obstacle-free prediction on the input lane driving image to predict the obstacle-free image of the corresponding lane under the no-obstacle condition.
Further, in this step, before the inputting the lane driving image into the pre-trained generative adversarial network for image generation, the method further includes:
inputting a lane sample image into the generator of the generative adversarial network for image generation, to obtain a lane generation image;
The generative adversarial network comprises a generator and a discriminator. The generator generates data from its input image, and the discriminator judges whether the image produced by the generator is real; the two thereby play a game over the image data, which effectively improves the accuracy of the lane generation images produced by the generator.
inputting the lane generation image and the obstacle-free image corresponding to the lane sample image into the discriminator of the generative adversarial network for image discrimination, to obtain an image discrimination result;
The discriminator judges, against the obstacle-free image corresponding to the input lane sample image, whether the generated image output by the generator is real; when the generated image is judged to be real, it is taken as the obstacle-free image corresponding to the lane sample image.
calculating a model loss value from the image discrimination result, and updating the parameters of the generator and the discriminator respectively according to the model loss value until the generator and the discriminator converge, to obtain the pre-trained generative adversarial network;
The pre-trained generative adversarial network is thus obtained through this adversarial game, and it can output the obstacle-free image of an image to be detected under the no-obstacle condition. Optionally, in this step, when the images to be detected arrive as a video stream, three consecutive frames of the stream are input into the pre-trained generative adversarial network, which yields the obstacle-free image corresponding to the first of the three frames. For example, when the video stream comprises a first frame d1, a second frame d2, a third frame d3 and a fourth frame d4, and the first frame d1 is the image to be detected, then d1, d2 and d3 are input into the pre-trained generative adversarial network to obtain the obstacle-free image corresponding to d1. A condensed sketch of one training step is given below.
Step S30: compare the lane driving image with the obstacle-free image to obtain obstacle information.
Here, by comparing the lane driving image with the obstacle-free image, which contains no obstacle, the obstacle information on the lane driving image can be effectively identified.
In this embodiment, lane line detection on the image to be detected effectively determines the position information of each lane line; the lane driving image in the image to be detected can be effectively determined from that position information; obstacle-free prediction on the lane driving image yields the corresponding obstacle-free image; and comparing the lane driving image with the obstacle-free image effectively determines the obstacle information on the lane driving image.
Embodiment Two
Referring to fig. 2, which shows a flowchart of the obstacle detection method provided by the second embodiment of the present application, this embodiment refines step S30 and includes:
and S31, respectively acquiring pixel values of all pixel points on the lane driving image and the barrier-free image to obtain a first pixel value set and a second pixel value set.
Because the lane driving image and the obstacle-free image have the same size, the pixel values of the pixel points on the two images are acquired in the same preset order, so that, between the first pixel value set and the second pixel value set, pixel points at the same ordering position have the same pixel coordinates.
For example, when the lane driving image and the obstacle-free image are both 2x2-pixel images, the first pixel value set comprises pixel points e1, e2, e3 and e4, and the second pixel value set comprises pixel points e5, e6, e7 and e8; then e1 and e5 have the same pixel coordinates, as do e2 and e6, e3 and e7, and e4 and e8.
Step S32: determine an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set, and generate the obstacle information according to the obstacle image.
Optionally, in this step, the determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set includes:
determining, according to the first pixel value set and the second pixel value set, the pixel difference between the lane driving image and the obstacle-free image at each shared pixel point;
Specifically, the pixel difference is computed between the pixel points at the same position in the first and second pixel value sets, which gives the pixel difference between the lane driving image and the obstacle-free image at each pixel point. Continuing the 2x2 example above, the pixel differences are computed between e1 and e5, e2 and e6, e3 and e7, and e4 and e8.
if the pixel difference at any pixel point is larger than a preset threshold, marking that pixel point on the lane driving image;
The preset threshold can be set as required. If the pixel difference at a pixel point exceeds the threshold, an obstacle is judged to exist at that point on the lane driving image. For example, when the differences between e1 and e5 and between e2 and e6 both exceed the preset threshold, obstacles are judged to exist at pixel points e1 and e2, and those points are marked on the lane driving image, which facilitates the subsequent determination of the obstacle image.
On the lane driving image, the image formed by the marked pixel points is determined as the obstacle image. A sketch of this pixel-difference comparison is given below.
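The following sketch covers steps S31 and S32, assuming both inputs are same-sized BGR arrays; the threshold value of 30 is an illustrative assumption standing in for the preset threshold.

```python
import cv2
import numpy as np

def obstacle_mask(lane_image, obstacle_free_image, threshold=30):
    # Per-pixel absolute difference between the two same-sized images.
    diff = cv2.absdiff(lane_image, obstacle_free_image)
    diff_gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    # Mark every pixel whose difference exceeds the preset threshold;
    # the non-zero pixels together form the obstacle image.
    return (diff_gray > threshold).astype(np.uint8) * 255
```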
Further, in this step, the generating the obstacle information according to the obstacle image includes:
filtering the obstacle image according to a preset parameter range to obtain a filtered image, and extracting an image contour from the filtered image;
the preset parameter range can be set according to requirements, and comprises a pixel brightness range and a pixel color range, and the accuracy of extracting the image contour in the filtered image is effectively improved by carrying out image filtering on the obstacle image according to the preset parameter range.
Performing image extraction on the obstacle image according to the image contour to obtain an obstacle extraction image, and extracting image features in the obstacle extraction image;
the image contour is the contour of the corresponding obstacle in the obstacle image, so that the image contour is used for extracting the image of the obstacle to obtain an obstacle extracted image corresponding to the obstacle in the obstacle image, and the accuracy of the subsequent determination of the type of the obstacle is improved through the image feature in the obstacle extracted image, and optionally, the image feature comprises the gradient, the color histogram, the color aggregation vector or the texture feature of the pixel point.
Determining the type of the obstacle in the obstacle extraction image according to the image characteristics and the image contour, and acquiring the image coordinates of the obstacle image in the image to be detected;
the feature similarity and the contour similarity are obtained by respectively calculating the similarity between the image feature, the image contour and the preset feature and the preset contour of the preset type, if the feature similarity and the contour similarity between the image feature, the image contour and any preset type are all larger than the corresponding preset similarity, the preset type is determined as the type of the obstacle in the obstacle extraction image, the preset similarity can be set according to the requirement, for example, the preset similarity can be set to 80%, 75% or 80% and the like.
For example, when the feature similarity between the image features and the preset automobile type is larger than a first preset similarity and the contour similarity is larger than a second preset similarity, the preset automobile type is determined as the type of the obstacle in the obstacle extraction image, as in the sketch below.
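A sketch of the type decision: cosine similarity for the features and cv2.matchShapes (converted from a distance to a similarity) for the contours are assumptions of this sketch; the application only requires that both similarities exceed their preset thresholds.

```python
import cv2
import numpy as np

def classify_obstacle(feature_vec, contour, templates,
                      feat_thresh=0.8, shape_thresh=0.75):
    # templates: {type_name: (reference_feature_vector, reference_contour)}
    for type_name, (ref_feat, ref_contour) in templates.items():
        feat_sim = float(np.dot(feature_vec, ref_feat) /
                         (np.linalg.norm(feature_vec) * np.linalg.norm(ref_feat)))
        # matchShapes returns a distance; map it into a (0, 1] similarity.
        dist = cv2.matchShapes(contour, ref_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        shape_sim = 1.0 / (1.0 + dist)
        if feat_sim > feat_thresh and shape_sim > shape_thresh:
            return type_name
    return None  # no preset type matched
```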
Determining the image coordinates of the obstacle extraction image in the image to be detected according to the image coordinates of the obstacle image in the image to be detected, obtaining the obstacle coordinates, and generating the obstacle information according to the obstacle coordinates and the type of the obstacle;
Specifically, the image coordinates of the obstacle image in the image to be detected and the image coordinates of the obstacle extraction image in the obstacle image are acquired to obtain a first coordinate and a second coordinate; a coordinate mapping between the obstacle extraction image and the image to be detected is determined from the first coordinate and the second coordinate; and the image coordinates of the obstacle extraction image in the obstacle image are mapped through this coordinate mapping to obtain the obstacle coordinates.
Still further, in this step, the extracting image features from the obstacle extraction image includes: converting the obstacle extraction image to grayscale to obtain a grayscale image, and normalizing the grayscale image; then extracting the gradient of each pixel point in the normalized grayscale image to obtain the image features. The grayscale conversion and normalization effectively screen the pixel points in the obstacle extraction image and improve the accuracy of the feature extraction from the grayscale image, as sketched below.
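A sketch of this feature extraction, with Sobel filters as one common choice for the per-pixel gradients (the application does not fix the gradient operator):

```python
import cv2
import numpy as np

def gradient_features(obstacle_extraction_bgr):
    # Grayscale processing followed by normalization to [0, 1].
    gray = cv2.cvtColor(obstacle_extraction_bgr, cv2.COLOR_BGR2GRAY)
    norm = gray.astype(np.float32) / 255.0
    # Per-pixel gradients of the normalized grayscale image.
    gx = cv2.Sobel(norm, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(norm, cv2.CV_32F, 0, 1, ksize=3)
    # Gradient magnitude at every pixel, flattened into a feature vector.
    return cv2.magnitude(gx, gy).ravel()
```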
In this embodiment, acquiring the pixel values of the pixel points on the lane driving image and on the obstacle-free image to obtain the first pixel value set and the second pixel value set facilitates determining the pixel differences between the two images at the same pixel points; based on those differences, the pixel points corresponding to an obstacle can be effectively marked on the lane driving image, and the obstacle image corresponding to that obstacle can then be effectively determined from the marked pixel points.
Embodiment Three
Fig. 3 shows a schematic structural diagram of an obstacle detection system 100 provided by the third embodiment of the present application, corresponding to the obstacle detection method described in the above embodiments. For convenience of explanation, only the portions related to the embodiments of the present application are shown.
Referring to fig. 3, the system includes: a lane line detection module 10, an obstacle-free prediction module 11, and an image comparison module 12, wherein:
the lane line detection module 10 is configured to respond to the received image to be detected, perform lane line detection on the image to be detected, and obtain position information of the lane line.
The barrier-free prediction module 11 is configured to determine a lane driving image in the image to be detected according to the position information of the lane line, where the lane driving image is an area image formed by the lane line in the image to be detected, and perform barrier-free prediction on the lane driving image to obtain a barrier-free image.
The obstacle-free prediction module 11 is further configured to: input the lane driving image into a pre-trained generative adversarial network for image generation to obtain the obstacle-free image.
Optionally, the obstacle-free prediction module 11 is further configured to: input a lane sample image into the generator of the generative adversarial network for image generation to obtain a lane generation image;
input the lane generation image and the obstacle-free image corresponding to the lane sample image into the discriminator of the generative adversarial network for image discrimination to obtain an image discrimination result;
and calculate a model loss value from the image discrimination result, and update the parameters of the generator and the discriminator respectively according to the model loss value until the generator and the discriminator converge, to obtain the pre-trained generative adversarial network.
The image comparison module 12 is configured to compare the lane driving image with the obstacle-free image to obtain obstacle information.
The image comparison module 12 is further configured to: acquire the pixel values of all pixel points on the lane driving image and on the obstacle-free image respectively, to obtain a first pixel value set and a second pixel value set;
and determine an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set, and generate the obstacle information according to the obstacle image.
Optionally, the image comparison module 12 is further configured to: filter the obstacle image according to a preset parameter range to obtain a filtered image, and extract an image contour from the filtered image;
extract an obstacle extraction image from the obstacle image according to the image contour, and extract image features from the obstacle extraction image;
determine the type of the obstacle in the obstacle extraction image according to the image features and the image contour, and acquire the image coordinates of the obstacle image in the image to be detected;
determine the image coordinates of the obstacle extraction image in the image to be detected according to the image coordinates of the obstacle image in the image to be detected, to obtain the obstacle coordinates;
and generate the obstacle information according to the obstacle coordinates and the type of the obstacle.
Further, the image comparison module 12 is further configured to: determine, according to the first pixel value set and the second pixel value set, the pixel difference between the lane driving image and the obstacle-free image at each shared pixel point;
if the pixel difference at any pixel point is larger than a preset threshold, mark that pixel point on the lane driving image;
and determine the image formed by the marked pixel points on the lane driving image as the obstacle image.
Still further, the image comparison module 12 is further configured to: convert the obstacle extraction image to grayscale to obtain a grayscale image, and normalize the grayscale image;
and extract the gradient of each pixel point in the normalized grayscale image to obtain the image features.
In this embodiment, lane line detection on the image to be detected effectively determines the position information of each lane line; the lane driving image in the image to be detected can be effectively determined from that position information; obstacle-free prediction on the lane driving image yields the corresponding obstacle-free image; and comparing the lane driving image with the obstacle-free image effectively determines the obstacle information on the lane driving image.
It should be noted that, because the information interaction and execution processes between the above devices/modules are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment sections and are not repeated here.
Fig. 4 is a schematic structural diagram of a terminal device 2 according to a fourth embodiment of the present application. As shown in fig. 4, the terminal device 2 of this embodiment includes: at least one processor 20 (only one processor is shown in fig. 4), a memory 21 and a computer program 22 stored in the memory 21 and executable on the at least one processor 20, the processor 20 implementing the steps in any of the various method embodiments described above when executing the computer program 22.
The terminal device 2 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 20 and the memory 21. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the terminal device 2 and does not constitute a limitation of it; the device may include more or fewer components than illustrated, combine certain components, or use different components, and may for example also include input-output devices, network access devices, etc.
The processor 20 may be a central processing unit (Central Processing Unit, CPU), and the processor 20 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 21 may in some embodiments be an internal storage unit of the terminal device 2, such as a hard disk or a memory of the terminal device 2. The memory 21 may in other embodiments also be an external storage device of the terminal device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 2. Further, the memory 21 may also include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs, etc., such as program codes of the computer program. The memory 21 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that may be performed in the various method embodiments described above.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application implements all or part of the flows of the above method embodiments through a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (8)

1. A method of detecting an obstacle, the method comprising:
in response to receiving an image to be detected, performing lane line detection on the image to be detected to obtain the position information of the lane lines;
determining a lane driving image in the image to be detected according to the position information of the lane lines, wherein the lane driving image is the area image formed by the lane lines in the image to be detected;
performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image;
comparing the lane driving image with the obstacle-free image to obtain obstacle information;
wherein the performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image comprises:
inputting a lane sample image into the generator of a generative adversarial network for image generation, to obtain a lane generation image;
inputting the lane generation image and the obstacle-free image corresponding to the lane sample image into the discriminator of the generative adversarial network for image discrimination, to obtain an image discrimination result;
calculating a model loss value from the image discrimination result, and updating the parameters of the generator and the discriminator respectively according to the model loss value until the generator and the discriminator converge, to obtain a pre-trained generative adversarial network;
and inputting the lane driving image into the pre-trained generative adversarial network for image generation, to obtain the obstacle-free image.
2. The obstacle detection method as claimed in claim 1, wherein the comparing the lane driving image with the obstacle-free image to obtain obstacle information comprises:
acquiring the pixel values of all pixel points on the lane driving image and on the obstacle-free image respectively, to obtain a first pixel value set and a second pixel value set;
and determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set, and generating the obstacle information according to the obstacle image.
3. The obstacle detection method as claimed in claim 2, wherein the generating the obstacle information according to the obstacle image comprises:
filtering the obstacle image according to a preset parameter range to obtain a filtered image, and extracting an image contour from the filtered image;
extracting an obstacle extraction image from the obstacle image according to the image contour, and extracting image features from the obstacle extraction image;
determining the type of the obstacle in the obstacle extraction image according to the image features and the image contour, and acquiring the image coordinates of the obstacle image in the image to be detected;
determining the image coordinates of the obstacle extraction image in the image to be detected according to the image coordinates of the obstacle image in the image to be detected, to obtain the obstacle coordinates;
and generating the obstacle information according to the obstacle coordinates and the type of the obstacle.
4. The obstacle detection method as claimed in claim 2, wherein the determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set comprises:
determining, according to the first pixel value set and the second pixel value set, the pixel difference between the lane driving image and the obstacle-free image at each shared pixel point;
if the pixel difference at any pixel point is larger than a preset threshold, marking that pixel point on the lane driving image;
and determining the image formed by the marked pixel points on the lane driving image as the obstacle image.
5. The obstacle detection method as claimed in claim 3, wherein the extracting image features from the obstacle extraction image comprises:
converting the obstacle extraction image to grayscale to obtain a grayscale image, and normalizing the grayscale image;
and extracting the gradient of each pixel point in the normalized grayscale image to obtain the image features.
6. An obstacle detection system, comprising:
a lane line detection module, configured to perform, in response to receiving an image to be detected, lane line detection on the image to be detected to obtain the position information of the lane lines;
an obstacle-free prediction module, configured to determine a lane driving image in the image to be detected according to the position information of the lane lines, wherein the lane driving image is the area image formed by the lane lines in the image to be detected, and to perform obstacle-free prediction on the lane driving image to obtain an obstacle-free image;
and an image comparison module, configured to compare the lane driving image with the obstacle-free image to obtain obstacle information;
wherein the obstacle-free prediction module is further configured to: input the lane driving image into a pre-trained generative adversarial network for image generation to obtain the obstacle-free image;
input a lane sample image into the generator of the generative adversarial network for image generation to obtain a lane generation image;
input the lane generation image and the obstacle-free image corresponding to the lane sample image into the discriminator of the generative adversarial network for image discrimination to obtain an image discrimination result;
and calculate a model loss value from the image discrimination result, and update the parameters of the generator and the discriminator respectively according to the model loss value until the generator and the discriminator converge, to obtain the pre-trained generative adversarial network.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 5.
CN202110534201.XA 2021-05-17 2021-05-17 Obstacle detection method, obstacle detection system, terminal device and storage medium (granted as CN113297939B, Active)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110534201.XA | 2021-05-17 | 2021-05-17 | Obstacle detection method, obstacle detection system, terminal device and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110534201.XA | 2021-05-17 | 2021-05-17 | Obstacle detection method, obstacle detection system, terminal device and storage medium

Publications (2)

Publication Number Publication Date
CN113297939A CN113297939A (en) 2021-08-24
CN113297939B (en) 2024-04-16

Family

ID=77322386

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110534201.XA | Obstacle detection method, obstacle detection system, terminal device and storage medium | 2021-05-17 | 2021-05-17

Country Status (1)

Country Link
CN (1) CN113297939B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114694115A (en) * 2022-03-24 2022-07-01 商汤集团有限公司 Road obstacle detection method, device, equipment and storage medium
CN115797783A (en) * 2023-02-01 2023-03-14 北京有竹居网络技术有限公司 Method and device for generating barrier-free information, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006004188A (en) * 2004-06-17 2006-01-05 Daihatsu Motor Co Ltd Obstacle recognition method and obstacle recognition device
CN106228110A (en) * 2016-07-07 2016-12-14 浙江零跑科技有限公司 A kind of barrier based on vehicle-mounted binocular camera and drivable region detection method
CN108460760A (en) * 2018-03-06 2018-08-28 陕西师范大学 A kind of Bridge Crack image discriminating restorative procedure fighting network based on production
CN109188460A (en) * 2018-09-25 2019-01-11 北京华开领航科技有限责任公司 Unmanned foreign matter detection system and method
CN109993074A (en) * 2019-03-14 2019-07-09 杭州飞步科技有限公司 Assist processing method, device, equipment and the storage medium driven
CN110239592A (en) * 2019-07-03 2019-09-17 中铁轨道交通装备有限公司 A kind of active barrier of rail vehicle and derailing detection system
CN110765922A (en) * 2019-10-18 2020-02-07 华南理工大学 AGV is with two mesh vision object detection barrier systems
CN110929655A (en) * 2019-11-27 2020-03-27 厦门金龙联合汽车工业有限公司 Lane line identification method in driving process, terminal device and storage medium
CN111507145A (en) * 2019-01-31 2020-08-07 上海欧菲智能车联科技有限公司 Method, system and device for detecting barrier at storage position of embedded vehicle-mounted all-round looking system
CN112329552A (en) * 2020-10-16 2021-02-05 爱驰汽车(上海)有限公司 Obstacle detection method and device based on automobile

Also Published As

Publication number Publication date
CN113297939A (en) 2021-08-24

Similar Documents

Publication Publication Date Title
Bilal et al. Real-time lane detection and tracking for advanced driver assistance systems
US9082038B2 (en) Dram c adjustment of automatic license plate recognition processing based on vehicle class information
Huang et al. Vehicle detection and inter-vehicle distance estimation using single-lens video camera on urban/suburb roads
Li et al. Lane detection based on connection of various feature extraction methods
CN113297939B (en) Obstacle detection method, obstacle detection system, terminal device and storage medium
JP2018194912A (en) Obstacle on-road detection device, method and program
CN110472580B (en) Method, device and storage medium for detecting parking stall based on panoramic image
CN110879950A (en) Multi-stage target classification and traffic sign detection method and device, equipment and medium
CN105426863B (en) The method and apparatus for detecting lane line
CN112149649B (en) Road spray detection method, computer equipment and storage medium
CN112528807B (en) Method and device for predicting running track, electronic equipment and storage medium
Azad et al. New method for optimization of license plate recognition system with use of edge detection and connected component
CN112257541A (en) License plate recognition method, electronic device and computer-readable storage medium
CN114387591A (en) License plate recognition method, system, equipment and storage medium
CN113408364B (en) Temporary license plate recognition method, system, device and storage medium
Chen Road vehicle recognition algorithm in safety assistant driving based on artificial intelligence
CN116721396A (en) Lane line detection method, device and storage medium
CN116343085A (en) Method, system, storage medium and terminal for detecting obstacle on highway
CN105069410A (en) Unstructured road recognition method and device
CN115482672A (en) Vehicle reverse running detection method and device, terminal equipment and storage medium
CN113449647A (en) Method, system, device and computer-readable storage medium for fitting curved lane line
CN113435350A (en) Traffic marking detection method, device, equipment and medium
Zhang et al. The Line Pressure Detection for Autonomous Vehicles Based on Deep Learning
CN114627651B (en) Pedestrian protection early warning method and device, electronic equipment and readable storage medium
CN117372924B (en) Video detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant