WO2021098359A1 - Lane line recognition method, device, equipment, and storage medium - Google Patents

Lane line recognition method, device, equipment, and storage medium

Info

Publication number
WO2021098359A1
WO2021098359A1 (PCT/CN2020/115390 · CN2020115390W)
Authority
WO
WIPO (PCT)
Prior art keywords
lane line
current frame
frame image
area
image
Prior art date
Application number
PCT/CN2020/115390
Other languages
English (en)
French (fr)
Inventor
张懿
贾澜鹏
刘帅成
Original Assignee
成都旷视金智科技有限公司
北京旷视科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都旷视金智科技有限公司, 北京旷视科技有限公司 filed Critical 成都旷视金智科技有限公司
Priority to US17/767,367 (published as US20220375234A1)
Publication of WO2021098359A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Definitions

  • the pixel gray level of the environmental image varies under the influence of the image sensor's white balance algorithm, different light intensities, ground reflections, and the like, resulting in inaccurate lane lines when they are determined from the pixel gray level of the environmental image.
  • the foregoing detection of the current frame image collected by the vehicle to determine the multiple detection frames in which the lane line in the current frame image is located includes:
  • the foregoing input of the current frame image into the lane line classification model to obtain multiple detection frames where the lane line in the current frame image is located includes:
  • obtaining the multiple detection frames where the lane lines in the current frame image are located includes:
  • the multiple to-be-recognized images are sequentially input into the lane line classification model to obtain multiple detection frames where the lane line is located.
  • the foregoing determining the connected area according to the position information of the multiple detection frames includes:
  • the multiple detection frames are combined to determine the combined area where the multiple detection frames are located;
  • the connected areas corresponding to the multiple detection frames are determined.
  • the foregoing edge detection on the connected area to determine the location information of the lane line in the connected area includes:
  • the location information where the target edge area is located is used as the location information where the lane line is located.
  • the above-mentioned preset conditions include at least one of the following: the target edge region includes a left edge and a right edge; the far-end width of the target edge region is less than the near-end width; and the far-end width of the target edge region is greater than the product of the near-end width and a width coefficient.
  • the method further includes:
  • target tracking is performed on the next frame of the current frame image, and the position information of the lane line in the next frame image is obtained.
  • the target tracking is performed on the next frame of the current frame image according to the position information of the lane line in the current frame image, and the position information of the lane line in the next frame image is obtained.
  • Target tracking is performed on the target area image, and the position information of the lane line in the next frame of image is obtained.
  • the method further includes:
  • the area image corresponding to the lane line estimation area in the next frame image of the current frame image is selected as the next frame image.
  • the method further includes:
  • the above-mentioned warning condition includes that the vehicle is driving on a solid line, or that the duration of the vehicle crossing a dashed line exceeds a preset duration threshold.
  • a lane line recognition device comprising:
  • the detection module is used to detect the current frame image collected by the vehicle, and determine the multiple detection frames where the lane line in the current frame image is located;
  • the first determining module is configured to determine a connected area according to the position information of a plurality of detection frames, and the connected area includes a lane line;
  • the second determining module is used to perform edge detection on the connected area and determine the location information of the lane line in the connected area.
  • In a third aspect, a computer device includes a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the lane line recognition method when executing the computer program.
  • a computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the above-mentioned lane line recognition method are realized.
  • the position information of the lane line is obtained by first detecting the current frame image collected by the vehicle to determine the multiple detection frames in which the lane line in the current frame image is located, determining a connected area including the lane line according to the position information of the multiple detection frames, and then performing edge detection on the connected area to determine the position information of the lane line in the connected area; that is, the current frame image is first divided into multiple detection frames, the detection frames are then connected to obtain a connected area including the lane line, and edge detection is then performed on the connected area, which avoids the problem that the determined position information of the lane line is inaccurate when the pixel gray level of the environmental image changes drastically and improves the accuracy of the determined position information of the lane line.
  • FIG. 1 is a schematic diagram of an application environment of a lane line recognition method in an embodiment;
  • FIG. 2 is a schematic flowchart of a lane line recognition method in an embodiment;
  • FIG. 2a is a schematic structural diagram of a lane line recognition model in an embodiment;
  • FIG. 3 is a schematic flowchart of a lane line recognition method in another embodiment;
  • FIG. 4 is a schematic flowchart of a lane line recognition method in another embodiment;
  • FIG. 4a is a schematic diagram of a merged area in an embodiment;
  • FIG. 5 is a schematic flowchart of a lane line recognition method in another embodiment;
  • FIG. 5a is a schematic diagram of a connected area in an embodiment;
  • FIG. 6 is a schematic flowchart of a lane line recognition method in another embodiment;
  • FIG. 7 is a schematic flowchart of a lane line recognition method in another embodiment;
  • FIG. 8 is a schematic flowchart of a lane line recognition method in another embodiment;
  • FIG. 8a is a schematic diagram of the intersection point of lane lines in an embodiment;
  • FIG. 9 is a schematic flowchart of a lane line recognition method in another embodiment;
  • FIG. 10 is a schematic structural diagram of a lane line recognition device provided in an embodiment;
  • FIG. 11 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
  • FIG. 12 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
  • FIG. 13 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
  • FIG. 14 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
  • FIG. 15 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
  • FIG. 16 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
  • FIG. 17 is an internal structure diagram of a computer device in an embodiment;
  • FIG. 18 schematically shows a block diagram of a computing processing device for executing the method according to the present invention;
  • FIG. 19 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
  • the lane line recognition method, device, equipment, and storage medium provided in this application are intended to solve the problem that lane lines determined by traditional methods are inaccurate.
  • the technical solution of the present application and how the technical solution of the present application solves the above-mentioned technical problems will be described in detail through the embodiments and the accompanying drawings.
  • the following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. It should be understood that the specific embodiments described here are only used to explain the present application, and are not used to limit the present application. Obviously, the described embodiments are part of the embodiments of the present invention, rather than all of the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
  • the lane line recognition method provided in this embodiment can be applied to the application environment as shown in FIG. 1.
  • the lane line recognition device 101 provided on the vehicle 100 is used to execute the method steps shown in FIGS. 2-9 below.
  • the lane line recognition method provided in this embodiment can also be applied to the application environment of robot pathfinding in a logistics warehouse, where the robot performs path recognition by recognizing lane lines, which is not limited in the embodiment of this application.
  • the execution body of the lane line recognition method may be a lane line recognition device, which can be implemented as part or all of the lane line recognition terminal through software, hardware, or a combination of software and hardware.
  • Fig. 2 is a schematic flowchart of a lane line recognition method in an embodiment. This embodiment relates to the specific process of obtaining the position information of the lane line by detecting the current frame image. As shown in Figure 2, the method includes the following steps:
  • S101 Detect a current frame image collected by the vehicle, and determine multiple detection frames where lane lines in the current frame image are located.
  • the current frame image may be an image collected by an image acquisition device provided on the vehicle, and the current frame image may include environmental information around the vehicle when the vehicle is running.
  • the image acquisition device is a camera
  • the data it collects is video data
  • the current frame image may be an image corresponding to the current frame in the video data.
  • the detection frame may be an area including a lane line in the current frame image, or a rough selection area of the lane line in the current frame image.
  • the position information of the detection frame can be used to indicate the position of the lane line area in the current frame of the image. It should be noted that the detection frame may be an area smaller than the location where all lane lines are located, that is, a detection frame usually only includes part of the lane lines, rather than all the lane lines.
  • detecting the current frame image collected by the vehicle to determine the multiple detection frames in which the lane line in the current frame image is located can be realized by image detection technology.
  • the lane line area recognition model can be used to determine the multiple detection frames where the lane lines in the current frame image are located.
  • usually, one frame image can include multiple lane lines, so the detection frames indicating the same lane line can first be connected to form a connected area, and the connected area includes one lane line. That is, when multiple detection frames are acquired, the detection frames whose indicated lane lines overlap can be connected according to the position information of the detection frames to obtain a connected area.
  • S103 Perform edge detection on the connected area, and determine the position information of the lane line in the connected area.
  • the location information where the lane line is located can be used to indicate the area where the lane line in the environment image is located, and it can mark the lane line in different colors in the environment image.
  • edge detection can be performed on the position in the current frame image indicated by the connected area; that is, the edge areas of the connected area whose image pixel gray levels differ markedly are selected to determine the position information of the lane line.
  • in the above method, the position information of the lane line is obtained by first detecting the current frame image collected by the vehicle, determining the multiple detection frames in which the lane line in the current frame image is located, and determining a connected area including the lane line according to the position information of the multiple detection frames, and then performing edge detection on the connected area to determine the position information of the lane line in the connected area; that is, the current frame image is first divided into multiple detection frames, the detection frames are connected to obtain a connected area including the lane line, and edge detection is performed on the connected area, which avoids the problem of inaccurate position information of the determined lane line when the pixel gray level of the environmental image changes drastically and improves the accuracy of the determined position information of the lane line.
  • the current frame image is input into the lane line classification model to obtain multiple detection frames where the lane line in the current frame image is located; the lane line classification model includes at least two cascaded classifiers.
  • the lane line classification model can be a traditional neural network model; for example, it can be an Adaboost model, whose structure can be as shown in Figure 2a. The lane line classification model can include at least two cascaded classifiers, each level of which determines whether the image includes a lane line.
  • the vehicle's current frame image can be input directly into the lane line classification model, and the multiple detection frames corresponding to the current frame image are output through the mapping relationship between the current frame image and the detection frames preset in the model; alternatively, the vehicle's current frame image can first be scaled according to a preset scaling ratio so that the size of the scaled current frame image matches the region size recognizable by the lane line classification model, and the scaled current frame image is then input into the model to output the multiple detection frames corresponding to the current frame image; this embodiment of the present application does not limit this.
  • S201 Perform a zooming operation on the current frame image according to the recognizable area size of the lane line classification model to obtain a zoomed current frame image.
  • the size of the area identified by the traditional neural network model is a fixed size, for example, the fixed size is 20 ⁇ 20, or the fixed size is 30 ⁇ 30.
  • when the size of the lane line area in the current frame image collected by the image acquisition device is greater than the above fixed size, the lane line classification model cannot obtain the position information of the multiple lane line areas if the current frame image is input directly.
  • the current frame image can be zoomed through a zooming operation to obtain the zoomed current frame image, so that the size of the lane line area in the zoomed current frame image can match the size of the recognizable area of the lane line classification model.
  • the specific process of obtaining the position information of multiple lane line regions based on the zoomed current frame image and the lane line classification model may be as shown in FIG. 4.
  • the above-mentioned S202 "obtain multiple detection frames where the lane lines in the current frame image are located according to the zoomed current frame image and the lane line classification model" may include the following steps:
  • S301 Perform a sliding window operation on the zoomed current frame image according to the preset sliding window size to obtain multiple images to be recognized.
  • the preset sliding window size can be obtained from the recognizable region size of the lane line classification model; it can be the same as that region size or slightly smaller, which is not limited in the embodiment of the present application.
  • a sliding window operation may be performed on the zoomed current frame image to obtain a plurality of to-be-recognized images, wherein the size of the to-be-recognized image is obtained according to the preset sliding window size.
  • for example, if the scaled current frame image is 800×600 and the preset sliding window size is 20×20, the image in the window from the starting point (0, 0) to the end point (20, 20) is taken as the first image to be recognized; then, with a preset sliding step of 2, the window slides by 2 along the x axis, and the image in the window from (2, 0) to (22, 20) is taken as the second image to be recognized; the window slides in turn until the image in the window from (780, 580) to (800, 600) is taken as the last image to be recognized, yielding multiple images to be recognized.
  • S302 Input a plurality of images to be recognized into the lane line classification model in sequence to obtain multiple detection frames where the lane line is located.
  • when the multiple images to be recognized are input into the lane line classification model in sequence, the model can judge through a classifier whether each image to be recognized is an image of a lane line, where the classifier can be at least two cascaded classifiers; when the last-level classifier judges that an image to be recognized is a lane line image, the position information corresponding to the images judged to be lane line images can be determined as the multiple detection frames where the lane line is located.
  • the multiple detection boxes can be small windows as shown in Figure 4a.
  • in the above method, the terminal performs a scaling operation on the current frame image according to the region size recognizable by the lane line classification model to obtain the scaled current frame image, and obtains the position information of the multiple lane line areas from the scaled current frame image and the lane line classification model, which handles the case where, when the lane line classification model is a traditional neural network model, a current frame image that does not match the model's recognizable region size cannot be recognized. At the same time, since the structure of a traditional neural network model is simple, using it as the lane line classification model to obtain the position information of the lane line areas of the current frame image requires little computation, so a chip with high computing power is not needed, which reduces the cost of the device required for lane line recognition.
  • Fig. 5 is a schematic flowchart of a lane line recognition method in another embodiment. This embodiment relates to the specific process of how to determine the connected area according to the position information of multiple detection frames. As shown in FIG. 5, a possible implementation method of S102 "determine connected areas according to the position information of multiple detection frames" includes the following steps:
  • S401 Combine multiple detection frames according to the position information of the multiple detection frames, and determine a combined area where the multiple detection frames are located.
  • each detection frame includes part of a lane line, and multiple detection frames with overlapping positions usually correspond to one complete lane line; therefore, the detection frames with overlapping positions are merged to obtain the merged area in which the multiple detection frames are located, which usually includes one complete lane line.
  • the merged area may be two merged areas as shown in Fig. 4a.
  • S402 Determine connected areas corresponding to multiple detection frames according to the combined area.
  • the connected area may be the largest circumscribed polygon corresponding to the merged area, the largest circumscribed circle corresponding to the merged area, or the largest circumscribed sector corresponding to the merged area, which is not limited in the embodiment of the present application.
  • the connected area may be the largest circumscribed polygon of the two merged areas as shown in FIG. 5a.
  • the embodiment shown in FIG. 6 can be used to perform edge detection on the connected area to determine the location information of the lane line in the connected area.
  • the above 103" performs edge detection on the connected area to determine the connected area
  • a possible implementation method of "location information of the middle lane line” includes the following steps:
  • S501 Perform edge detection on a connected area to obtain a target edge area.
  • when the target edge area obtained by edge detection on the connected area is inaccurate, that is, when a target edge area may not actually be a lane line, whether the target edge area includes a lane line can be determined by judging whether it meets the preset conditions. When the target edge area meets the preset condition, the location information of the target edge area is used as the location information of the lane line.
  • the degree of change of the lane line width can be bounded by requiring the far-end width of the target edge region to be greater than the product of the near-end width and the width coefficient. For example, whether the far-end width of the target edge area is less than the near-end width can be judged by the following formula: length(i) ≥ length(i+1) and 0.7 × length(i) ≤ length(i+1).
  • the lane line estimation area can be determined according to the position information of the lane line in the current frame image, and the region image corresponding to the lane line estimation area in the next frame image is used as the next frame image. This is described in detail below through FIG. 8. As shown in FIG. 8, the method further includes the following steps:
  • if the driving state of the vehicle satisfies a preset warning condition, warning information is output. The warning condition includes the vehicle driving on a solid line, or the duration of the vehicle crossing a dashed line exceeding a preset duration threshold; that is, warning information is output when the vehicle is driving on a solid line, or when the duration of the vehicle crossing a dashed line exceeds the preset duration threshold. The warning information can be a voice prompt, a beeping alarm, or flashing lights, which is not limited in the embodiment of the present application.
  • the lane line recognition device provided by the embodiment of the present application can execute the foregoing method embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the first acquiring unit 102 is configured to obtain multiple detection frames where the lane lines in the current frame image are located according to the zoomed current frame image and the lane line classification model.
  • the first acquiring unit 102 is specifically configured to perform a sliding window operation on the zoomed current frame image according to the preset sliding window size to obtain multiple images to be recognized; and input the multiple images to be recognized into the lanes in sequence Line classification model to obtain multiple detection frames where lane lines are located.
  • the lane line recognition device provided by the embodiment of the present application can execute the foregoing method embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 13 is a schematic structural diagram of a lane line recognition device provided in another embodiment. Based on the embodiment shown in any one of FIGS. 10-12, as shown in FIG. 13, the second determining module 30 includes: a detection unit 301 And the second determining unit 302, wherein:
  • the second determining unit 302 is configured to use the location information of the target edge area as the location information of the lane line when the target edge area meets the preset condition.
  • the lane line recognition device provided by the embodiment of the present application can execute the foregoing method embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the lane line recognition device provided by the embodiment of the present application can execute the foregoing method embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 15 is shown based on FIG. 14. Of course, FIG. 15 can also be shown based on any one of FIGS. 10-13, and this is only an example.
  • FIG. 16 is a schematic structural diagram of a lane line recognition device provided in another embodiment.
  • the lane line recognition device further includes: an early warning module 60, of which:
  • the early warning module 60 is specifically configured to determine the driving state of the vehicle according to the position information of the lane line.
  • the driving state of the vehicle includes line driving; if the driving state of the vehicle satisfies a preset warning condition, output warning information.
  • the lane line recognition device provided by the embodiment of the present application can execute the foregoing method embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • a computer device is provided.
  • the computer device may be a terminal device, and its internal structure diagram may be as shown in FIG. 17.
  • the computer equipment includes a processor, a memory, a network interface, a display screen and an input device connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program is executed by the processor to realize a lane line recognition method.
  • the processor further implements the following steps when executing the computer program: input the current frame image into the lane line classification model to obtain multiple detection frames where the lane line in the current frame image is located; the lane line classification model includes at least two Cascaded classifiers.
  • the processor further implements the following steps when executing the computer program: according to the recognizable area size of the lane line classification model, the current frame image is scaled to obtain the scaled current frame image; according to the scaled current frame
  • the image and lane line classification model obtains multiple detection frames where the lane line in the current frame image is located.
  • the above-mentioned preset conditions include at least one of the following: the target edge region includes a left edge and a right edge; the far-end width of the target edge region is less than the near-end width; and the far-end width of the target edge region is greater than the product of the near-end width and a width coefficient.
  • Target tracking is performed on the target area image, and the position information of the lane line in the next frame of image is obtained.
  • the processor further implements the following steps when executing the computer program: determining the driving state of the vehicle according to the location information of the lane line, the driving state of the vehicle including line driving; if the driving state of the vehicle meets the preset warning conditions , Output warning information.
  • the above-mentioned warning condition includes the vehicle driving on a solid line, or the duration of the vehicle crossing a dashed line exceeding a preset duration threshold.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
  • the following steps are implemented: input the current frame image into the lane line classification model to obtain multiple detection frames where the lane line in the current frame image is located; the lane line classification model includes at least two Cascaded classifiers.
  • the following steps are implemented: according to the recognizable area size of the lane line classification model, the current frame image is scaled to obtain the scaled current frame image; according to the scaled current frame The image and lane line classification model obtains multiple detection frames where the lane line in the current frame image is located.
  • the following steps are implemented: according to a preset sliding window size, perform a sliding window operation on the zoomed current frame image to obtain multiple images to be recognized; Input the lane line classification model one by one to obtain multiple detection frames where the lane line is located.
  • the following steps are implemented: according to the position information of the multiple detection frames, the multiple detection frames are combined to determine the combined area where the multiple detection frames are located; and the multiple detection frames are determined according to the combined area. Connected regions corresponding to each detection frame.
  • edge detection is performed on the connected area to obtain the target edge area; when the target edge area meets a preset condition, the location information of the target edge area is taken as the lane The location information of the line.
  • the following steps are implemented: according to the position information of the lane line in the current frame image, target tracking is performed on the next frame image of the current frame image, and the lane line in the next frame image is obtained. Location information.
  • the following steps are implemented: the next frame of image is divided into a plurality of area images; the area image in the next frame of image corresponding to the position information of the lane line in the current frame of image is selected As the target area image;
  • Target tracking is performed on the target area image, and the position information of the lane line in the next frame of image is obtained.
  • the following steps are implemented: according to the position information of the lane line in the current frame image, determine the intersection of the lane line; according to the intersection point of the lane line and the lane line in the current frame image The location information of the lane line is determined to determine the lane line estimation area; the area image corresponding to the lane line estimation area in the next frame image of the current frame image is selected as the next frame image.
  • the driving state of the vehicle is determined according to the location information of the lane line, the driving state of the vehicle includes line driving; if the driving state of the vehicle meets the preset warning condition , Output warning information.
  • the above-mentioned warning condition includes the vehicle driving on a solid line, or the duration of the vehicle crossing a dashed line exceeding a preset duration threshold.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the computing processing device according to the embodiments of the present invention.
  • the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • FIG. 18 shows a computing processing device that can implement the method of the present invention.
  • the computing processing device may be a computer device, which traditionally includes a processor 1010 and a computer program product in the form of a memory 1020 or a computer readable medium.
  • the memory 1020 has a storage space 1030 for executing the program code 1031 of any method step in the above method.
  • the storage space 1030 for program codes may include various program codes 1031 respectively used to implement various steps in the above method. These program codes can be read from or written into one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards, or floppy disks. Such computer program products are usually portable or fixed storage units as described with reference to FIG. 19.
  • the storage unit may have storage segments, storage spaces, etc. arranged similarly to the memory 1020 in the computing processing device of FIG. 18.
  • the program code may, for example, be compressed in an appropriate form.
  • the storage unit includes computer-readable code 1031', that is, code that can be read by a processor such as 1010, which, when run by a computing processing device, causes the computing processing device to execute the method described above. The various steps.
  • any reference signs placed between parentheses should not be constructed as a limitation to the claims.
  • the word “comprising” does not exclude the presence of elements or steps not listed in the claims.
  • the word “a” or “an” preceding an element does not exclude the presence of multiple such elements.
  • the invention can be implemented by means of hardware comprising several different elements and by means of a suitably programmed computer. In the unit claims that list several devices, several of these devices may be embodied in the same hardware item.
  • the use of the words first, second, and third, etc. do not indicate any order. These words can be interpreted as names.
  • the technical features of the above-mentioned embodiments can be combined arbitrarily.

Abstract

This application relates to a lane line recognition method, device, equipment, and storage medium. The position information of a lane line is obtained by first detecting the current frame image collected by a vehicle to determine multiple detection frames in which the lane line in the current frame image is located, determining a connected area according to the position information of the multiple detection frames, the connected area including the lane line, and then performing edge detection on the connected area to determine the position information of the lane line in the connected area. That is, the current frame image is first divided into multiple detection frames, the detection frames are then connected to obtain a connected area including the lane line, and edge detection is then performed on the connected area. This avoids the problem that the determined position information of the lane line is inaccurate when the pixel gray level of the environmental image changes drastically, and improves the accuracy of the determined position information of the lane line.

Description

Lane line recognition method, device, equipment, and storage medium
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on November 21, 2019, with application number 201911147428.8 and the invention title "Lane line recognition method, device, equipment, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of image recognition, and in particular to a lane line recognition method, device, equipment, and storage medium.
Background
With the vigorous development of artificial intelligence technology, automatic driving has become a possible driving method. In automatic driving, images of the environment around the vehicle are usually acquired through a camera, and road information is obtained from the environmental images using artificial intelligence technology, so as to control the vehicle to drive according to the road information.
The process of obtaining road information from environmental images using artificial intelligence technology usually includes determining, from the environmental images, the lane lines of the road section on which the vehicle is traveling. As a common traffic marking, lane lines come in many different types. For example, by color, lane lines include white lines and yellow lines; by purpose, lane lines are divided into dashed lines, solid lines, double solid lines, and double dashed lines. When determining lane lines from an environmental image, a terminal usually determines them from the pixel gray levels of different areas of the environmental image; for example, an area whose pixel gray level is significantly higher than that of its surroundings is determined to be a solid-line area.
However, the pixel gray level of the environmental image varies under the influence of factors such as the white balance algorithm of the image sensor, different light intensities, and ground reflections, so that lane lines determined from the pixel gray level of the environmental image are inaccurate.
Summary
On this basis, in view of the problem that lane lines determined by traditional methods are inaccurate, a lane line recognition method, device, equipment, and storage medium are provided.
In a first aspect, a lane line recognition method includes:
detecting a current frame image collected by a vehicle, and determining multiple detection frames in which a lane line in the current frame image is located;
determining a connected area according to position information of the multiple detection frames, the connected area including the lane line; and
performing edge detection on the connected area, and determining position information of the lane line in the connected area.
In one embodiment, detecting the current frame image collected by the vehicle and determining the multiple detection frames in which the lane line in the current frame image is located includes:
inputting the current frame image into a lane line classification model to obtain the multiple detection frames in which the lane line in the current frame image is located, the lane line classification model including at least two cascaded classifiers.
In one embodiment, inputting the current frame image into the lane line classification model to obtain the multiple detection frames in which the lane line in the current frame image is located includes:
performing a scaling operation on the current frame image according to a region size recognizable by the lane line classification model to obtain a scaled current frame image; and
obtaining the multiple detection frames in which the lane line in the current frame image is located according to the scaled current frame image and the lane line classification model.
In one embodiment, obtaining the multiple detection frames in which the lane line in the current frame image is located according to the scaled current frame image and the lane line classification model includes:
performing a sliding-window operation on the scaled current frame image according to a preset sliding-window size to obtain multiple images to be recognized; and
inputting the multiple images to be recognized into the lane line classification model in sequence to obtain the multiple detection frames in which the lane line is located.
In one embodiment, determining the connected area according to the position information of the multiple detection frames includes:
merging the multiple detection frames according to the position information of the multiple detection frames, and determining a merged area in which the multiple detection frames are located; and
determining the connected area corresponding to the multiple detection frames according to the merged area.
In one embodiment, performing edge detection on the connected area and determining the position information of the lane line in the connected area includes:
performing edge detection on the connected area to obtain a target edge area; and
when the target edge area satisfies a preset condition, using the position information of the target edge area as the position information of the lane line.
In one embodiment, the preset condition includes at least one of the following: the target edge area includes a left edge and a right edge; the far-end width of the target edge area is less than the near-end width; and the far-end width of the target edge area is greater than the product of the near-end width and a width coefficient.
In one embodiment, the method further includes:
performing target tracking on a next frame image of the current frame image according to the position information of the lane line in the current frame image, to obtain position information of the lane line in the next frame image.
In one embodiment, performing target tracking on the next frame image of the current frame image according to the position information of the lane line in the current frame image, to obtain the position information of the lane line in the next frame image, includes:
dividing the next frame image into multiple region images;
selecting, as a target region image, the region image in the next frame image corresponding to the position information of the lane line in the current frame image; and
performing target tracking on the target region image to obtain the position information of the lane line in the next frame image.
In one embodiment, the method further includes:
determining an intersection point of the lane lines according to the position information of the lane line in the current frame image;
determining a lane line estimation area according to the intersection point of the lane lines and the position information of the lane line in the current frame image; and
selecting the region image corresponding to the lane line estimation area in the next frame image of the current frame image as the next frame image.
In one embodiment, the method further includes:
determining a driving state of the vehicle according to the position information of the lane line, the driving state of the vehicle including driving on a lane line; and
outputting warning information if the driving state of the vehicle satisfies a preset warning condition.
In one embodiment, the warning condition includes the vehicle driving on a solid line, or the duration of the vehicle crossing a dashed line exceeding a preset duration threshold.
In a second aspect, a lane line recognition device includes:
a detection module, configured to detect a current frame image collected by a vehicle and determine multiple detection frames in which a lane line in the current frame image is located;
a first determining module, configured to determine a connected area according to position information of the multiple detection frames, the connected area including the lane line; and
a second determining module, configured to perform edge detection on the connected area and determine position information of the lane line in the connected area.
In a third aspect, a computer device includes a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the above lane line recognition method when executing the computer program.
In a fourth aspect, a computer-readable storage medium has a computer program stored thereon, and the steps of the above lane line recognition method are implemented when the computer program is executed by a processor.
In the above lane line recognition method, device, equipment, and storage medium, the position information of the lane line is obtained by first detecting the current frame image collected by the vehicle to determine the multiple detection frames in which the lane line in the current frame image is located, determining a connected area including the lane line according to the position information of the multiple detection frames, and then performing edge detection on the connected area to determine the position information of the lane line in the connected area. That is, the current frame image is first divided into multiple detection frames, the detection frames are then connected to obtain a connected area including the lane line, and edge detection is then performed on the connected area. This avoids the problem that the determined position information of the lane line is inaccurate when the pixel gray level of the environmental image changes drastically, and improves the accuracy of the determined position information of the lane line.
The above description is only an overview of the technical solution of the present invention. In order to understand the technical means of the present invention more clearly so that it can be implemented in accordance with the contents of the specification, and to make the above and other objectives, features, and advantages of the present invention more apparent and understandable, specific embodiments of the present invention are set forth below.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
FIG. 1 is a schematic diagram of an application environment of a lane line recognition method in an embodiment;
FIG. 2 is a schematic flowchart of a lane line recognition method in an embodiment;
FIG. 2a is a schematic structural diagram of a lane line recognition model in an embodiment;
FIG. 3 is a schematic flowchart of a lane line recognition method in another embodiment;
FIG. 4 is a schematic flowchart of a lane line recognition method in another embodiment;
FIG. 4a is a schematic diagram of a merged area in an embodiment;
FIG. 5 is a schematic flowchart of a lane line recognition method in another embodiment;
FIG. 5a is a schematic diagram of a connected area in an embodiment;
FIG. 6 is a schematic flowchart of a lane line recognition method in another embodiment;
FIG. 7 is a schematic flowchart of a lane line recognition method in another embodiment;
FIG. 8 is a schematic flowchart of a lane line recognition method in another embodiment;
FIG. 8a is a schematic diagram of the intersection point of lane lines in an embodiment;
FIG. 9 is a schematic flowchart of a lane line recognition method in another embodiment;
FIG. 10 is a schematic structural diagram of a lane line recognition device provided in an embodiment;
FIG. 11 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
FIG. 12 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
FIG. 13 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
FIG. 14 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
FIG. 15 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
FIG. 16 is a schematic structural diagram of a lane line recognition device provided in another embodiment;
FIG. 17 is an internal structure diagram of a computer device in an embodiment;
FIG. 18 schematically shows a block diagram of a computing processing device for executing the method according to the present invention;
FIG. 19 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description of the Embodiments
The lane line recognition method, device, equipment, and storage medium provided in this application are intended to solve the problem that lane lines determined by traditional methods are inaccurate. The technical solution of this application, and how it solves the above technical problem, are described in detail below through embodiments in conjunction with the accompanying drawings. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not used to limit it. Obviously, the described embodiments are part of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
The lane line recognition method provided in this embodiment can be applied in the application environment shown in FIG. 1, where the lane line recognition device 101 provided on the vehicle 100 is used to execute the method steps shown in FIGS. 2-9 below. It should be noted that the lane line recognition method provided in this embodiment can also be applied in the application environment of robot pathfinding in a logistics warehouse, where a robot performs path recognition by recognizing lane lines; the embodiments of this application place no limitation on this.
It should be noted that the execution body of the lane line recognition method provided in the embodiments of this application may be a lane line recognition device, which may be implemented as part or all of a lane line recognition terminal through software, hardware, or a combination of software and hardware.
In order to make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are part of the embodiments of this application, rather than all of them.
FIG. 2 is a schematic flowchart of a lane line recognition method in an embodiment. This embodiment relates to the specific process of obtaining the position information of a lane line by detecting the current frame image. As shown in FIG. 2, the method includes the following steps:
S101: Detect a current frame image collected by a vehicle, and determine multiple detection frames in which a lane line in the current frame image is located.
The current frame image may be an image collected by an image acquisition device provided on the vehicle, and may include information about the environment around the vehicle while it is driving. The image acquisition device is usually a camera, and the data it collects is video data; that is, the current frame image may be the image corresponding to the current frame of the video data. A detection frame may be an area of the current frame image that includes a lane line, i.e., a coarsely selected lane line area in the current frame image. The position information of a detection frame may be used to indicate the position of a lane line area in the current frame image. It should be noted that a detection frame may be smaller than the area in which an entire lane line is located; that is, a detection frame usually includes only part of a lane line rather than the whole lane line.
Detecting the current frame image collected by the vehicle and determining the multiple detection frames in which the lane line in the current frame image is located can be realized by image detection technology. For example, a lane line area recognition model can be used to determine the multiple detection frames in which the lane lines in the current frame image are located.
S102: Determine a connected area according to the position information of the multiple detection frames, the connected area including a lane line.
Usually, one frame image can include multiple lane lines, so the detection frames indicating the same lane line can first be connected to obtain one connected area, which includes one lane line. That is, when multiple detection frames are acquired, the detection frames whose indicated lane lines overlap can be connected according to the position information of the detection frames to obtain a connected area.
S103: Perform edge detection on the connected area, and determine the position information of the lane line in the connected area.
The position information of the lane line can be used to indicate the area of the environmental image in which the lane line is located, and the lane line can be marked in the environmental image with a different color. Once the connected area is obtained, edge detection can be performed on the position in the current frame image indicated by the connected area; that is, the edge areas of the connected area whose image pixel gray levels differ markedly are selected to determine the position information of the lane line.
In the above lane line recognition method, the position information of the lane line is obtained by first detecting the current frame image collected by the vehicle to determine the multiple detection frames in which the lane line in the current frame image is located, determining a connected area including the lane line according to the position information of the multiple detection frames, and then performing edge detection on the connected area to determine the position information of the lane line in the connected area. In other words, the current frame image is first divided into multiple detection frames, the detection frames are then connected to obtain a connected area including the lane line, and edge detection is then performed on the connected area. This avoids the problem that the determined position information of the lane line is inaccurate when the pixel gray level of the environmental image changes drastically, and improves the accuracy of the determined position information of the lane line.
Optionally, the current frame image is input into a lane line classification model to obtain the multiple detection frames in which the lane line in the current frame image is located; the lane line classification model includes at least two cascaded classifiers.
The lane line classification model may be a traditional neural network model; for example, it may be an Adaboost model, whose structure may be as shown in FIG. 2a. The lane line classification model may include at least two cascaded classifiers, each level of which determines whether the image includes a lane line.
When inputting the current frame image into the lane line classification model to obtain the multiple detection frames in which the lane line in the current frame image is located, the vehicle's current frame image may be input directly into the lane line classification model, and the multiple detection frames corresponding to the current frame image are output through the mapping relationship between the current frame image and the detection frames preset in the model. Alternatively, the vehicle's current frame image may first be scaled according to a preset scaling ratio so that the size of the scaled current frame image matches the region size recognizable by the lane line classification model, and the scaled current frame image is then input into the lane line classification model to output the multiple detection frames corresponding to the current frame image through the preset mapping relationship; the embodiments of this application place no limitation on this.
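To make the cascade concrete, here is a minimal Python sketch (an illustration of the general cascaded-classifier idea, not the patent's actual Adaboost model): a patch survives only if every stage accepts it, and a rejection at any level ends evaluation early, which is what keeps a cascade cheap on the many non-lane-line windows. The linear stage scoring and all thresholds below are hypothetical.

    import numpy as np

    def make_stage(weights: np.ndarray, bias: float, threshold: float):
        # Hypothetical weak stage: a linear score on the flattened 20x20 patch.
        def stage(patch: np.ndarray) -> bool:
            return float(patch.ravel() @ weights) + bias >= threshold
        return stage

    def cascade_predict(patch: np.ndarray, stages) -> bool:
        # True only if every cascaded stage accepts the patch; all() short-circuits,
        # so a rejection at an early stage skips the remaining stages.
        return all(stage(patch) for stage in stages)

    rng = np.random.default_rng(0)
    stages = [make_stage(rng.normal(size=400), 0.0, -5.0),
              make_stage(rng.normal(size=400), 0.0, -2.0)]
    print(cascade_predict(rng.random((20, 20)), stages))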
FIG. 3 is a schematic flowchart of a lane line recognition method in another embodiment. This embodiment relates to the specific process of how to detect the current frame image collected by the vehicle and determine the multiple detection frames in which the lane line in the current frame image is located. As shown in FIG. 3, one possible implementation of the above S101 "detect a current frame image collected by the vehicle, and determine multiple detection frames in which the lane line in the current frame image is located" includes the following steps:
S201: Perform a scaling operation on the current frame image according to the region size recognizable by the lane line classification model to obtain a scaled current frame image.
When the lane line classification model is a traditional neural network model, the region size recognized by the model is a fixed size, for example 20×20 or 30×30. When the size of the lane line area in the current frame image collected by the image acquisition device is larger than this fixed size, the lane line classification model cannot obtain the position information of the multiple lane line areas if the current frame image is input directly. The current frame image can be scaled by a scaling operation to obtain a scaled current frame image, so that the size of the lane line areas in the scaled current frame image matches the region size recognizable by the lane line classification model.
S202: Obtain the multiple detection frames in which the lane line in the current frame image is located according to the scaled current frame image and the lane line classification model.
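A minimal sketch of the scaling step, assuming OpenCV is available; the expected lane-line region size of 80 pixels and the 20-pixel model window are illustrative values, not figures from the patent.

    import cv2

    def scale_for_model(frame, expected_region: int = 80, model_size: int = 20):
        # Shrink the frame so a lane-line region of roughly expected_region
        # pixels maps onto the fixed window size the classification model
        # can recognize.
        scale = model_size / expected_region  # e.g. 20/80 -> shrink by 4x
        return cv2.resize(frame, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_AREA)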
Optionally, the specific process of obtaining the position information of the multiple lane line areas from the scaled current frame image and the lane line classification model may be as shown in FIG. 4. As shown in FIG. 4, one possible implementation of the above S202 "obtain the multiple detection frames in which the lane line in the current frame image is located according to the scaled current frame image and the lane line classification model" includes the following steps:
S301: Perform a sliding-window operation on the scaled current frame image according to a preset sliding-window size to obtain multiple images to be recognized.
The preset sliding-window size may be obtained from the region size recognizable by the lane line classification model; it may be the same as that region size or slightly smaller, which is not limited in the embodiments of this application. A sliding-window operation may be performed on the scaled current frame image according to the preset sliding-window size to obtain multiple images to be recognized, where the size of each image to be recognized is determined by the preset sliding-window size. For example, if the scaled current frame image is 800×600 and the preset sliding-window size is 20×20, the image inside the window from the starting point (0, 0) to the end point (20, 20) is taken as the first image to be recognized; then, with a preset sliding step of 2, the window slides by 2 along the x axis, and the image inside the window from (2, 0) to (22, 20) is taken as the second image to be recognized; the window slides in turn until the image inside the window from (780, 580) to (800, 600) is taken as the last image to be recognized, yielding multiple images to be recognized.
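The 800×600 example above can be sketched directly; the generator below steps a 20×20 window by 2 pixels and yields each patch together with its top-left coordinate (a sketch of the operation, assuming a NumPy image).

    import numpy as np

    def sliding_windows(image: np.ndarray, win: int = 20, step: int = 2):
        # Yield ((x, y), patch) for every window position, row by row.
        h, w = image.shape[:2]
        for y in range(0, h - win + 1, step):
            for x in range(0, w - win + 1, step):
                yield (x, y), image[y:y + win, x:x + win]

    frame = np.zeros((600, 800), dtype=np.uint8)
    print(sum(1 for _ in sliding_windows(frame)))  # number of images to be recognized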
S302: Input the multiple images to be recognized into the lane line classification model in sequence to obtain the multiple detection frames in which the lane line is located.
When the multiple images to be recognized are input into the lane line classification model in sequence, the model can judge through its classifiers whether each image to be recognized is an image of a lane line, where the classifiers may be at least two cascaded classifiers. When the last-level classifier judges that an image to be recognized is a lane line image, the position information corresponding to the images judged to be lane line images can be determined as the multiple detection frames in which the lane line is located; that is, the multiple detection frames in which the lane line is located may be small windows as shown in FIG. 4a.
In the above lane line recognition method, the terminal performs a scaling operation on the current frame image according to the region size recognizable by the lane line classification model to obtain a scaled current frame image, and obtains the position information of the multiple lane line areas from the scaled current frame image and the lane line classification model, which handles the situation where, when the lane line classification model is a traditional neural network model, a current frame image that does not match the model's recognizable region size cannot be recognized. At the same time, since the structure of a traditional neural network model is simple, using it as the lane line classification model to obtain the position information of the lane line areas of the current frame image requires little computation, so a chip with high computing power is not needed, which reduces the cost of the device required for lane line recognition.
FIG. 5 is a schematic flowchart of a lane line recognition method in another embodiment. This embodiment relates to the specific process of how to determine the connected area according to the position information of the multiple detection frames. As shown in FIG. 5, one possible implementation of the above S102 "determine a connected area according to the position information of the multiple detection frames" includes the following steps:
S401: Merge the multiple detection frames according to their position information, and determine a merged area in which the multiple detection frames are located.
Here, detection frames whose positions overlap are determined according to the position information of the detection frames, and the overlapping detection frames are merged to obtain the merged area in which the multiple detection frames are located. As described in the above embodiments, each detection frame includes part of a lane line, and multiple detection frames with overlapping positions usually correspond to one complete lane line; therefore, merging the overlapping detection frames yields a merged area that usually includes one complete lane line. For example, the merged areas may be the two merged areas shown in FIG. 4a.
S402: Determine the connected area corresponding to the multiple detection frames according to the merged area.
On the basis of S401, once the merged area has been obtained, bounding-shape detection can be performed on the merged area to obtain the connected area corresponding to the multiple detection frames. It should be noted that the connected area may be the largest circumscribed polygon corresponding to the merged area, the largest circumscribed circle corresponding to the merged area, or the largest circumscribed sector corresponding to the merged area, which is not limited in the embodiments of this application. For example, the connected areas may be the largest circumscribed polygons of the two merged areas shown in FIG. 5a.
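One way to realize S401 and S402, sketched with OpenCV under the assumption that detection frames come as (x, y, w, h) tuples: painting every frame into a binary mask fuses overlapping frames into merged areas, and the convex hull of each merged area then serves as its circumscribed polygon.

    import cv2
    import numpy as np

    def connected_areas(boxes, frame_shape):
        # Paint each detection frame; overlapping frames fuse into merged areas.
        mask = np.zeros(frame_shape[:2], dtype=np.uint8)
        for x, y, w, h in boxes:
            cv2.rectangle(mask, (x, y), (x + w, y + h), 255, thickness=-1)
        # Each external contour is one merged area; its convex hull is the
        # circumscribed polygon used here as the connected area.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.convexHull(c) for c in contours]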
Optionally, the embodiment shown in FIG. 6 can be used to perform edge detection on the connected area and determine the position information of the lane line in the connected area. As shown in FIG. 6, one possible implementation of the above S103 "perform edge detection on the connected area, and determine the position information of the lane line in the connected area" includes the following steps:
S501: Perform edge detection on the connected area to obtain a target edge area.
S502: When the target edge area satisfies a preset condition, use the position information of the target edge area as the position information of the lane line.
Here, when the target edge area obtained by edge detection on the connected area is inaccurate, i.e., when a target edge area may not actually be a lane line, whether the target edge area includes a lane line can be determined by judging whether it satisfies the preset condition. When the target edge area satisfies the preset condition, its position information is used as the position information of the lane line.
Optionally, the preset condition includes at least one of the following: the target edge area includes a left edge and a right edge; the far-end width of the target edge area is less than the near-end width; and the far-end width of the target edge area is greater than the product of the near-end width and a width coefficient.
Since, in a planar image, a lane line is usually a line of a certain width, the target edge area can only possibly be a lane line when it includes both a left edge and a right edge. When the target edge area includes only a left edge or only a right edge, it cannot be a lane line and is a misjudgment. Meanwhile, in a planar image a lane line follows the principle of "thick when near, thin when far", so when the far-end width of the target edge area is less than the near-end width, the target edge area may be a lane line. Further, the degree of change of the lane line width can be bounded by requiring the far-end width of the target edge area to be greater than the product of the near-end width and the width coefficient. For example, whether the far-end width of the target edge area is less than the near-end width can be judged by the following formula:
length(i) ≥ length(i+1) and 0.7 × length(i) ≤ length(i+1)
That is, when the target edge area includes a left edge and a right edge, and the far-end width of the target edge area is less than the near-end width, the target edge area is the recognition result of the lane line.
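The formula can be checked row by row; in the sketch below, widths[i] is the lane-line width measured at row i ordered from near to far, and the 0.7 width coefficient follows the formula above (a sketch, not the patent's exact implementation).

    def satisfies_width_condition(widths, coeff: float = 0.7) -> bool:
        # Near-to-far widths must taper, but not faster than the coefficient allows.
        return all(widths[i] >= widths[i + 1] and coeff * widths[i] <= widths[i + 1]
                   for i in range(len(widths) - 1))

    print(satisfies_width_condition([12, 11, 10, 9]))  # True: tapers gently
    print(satisfies_width_condition([12, 5]))          # False: narrows too abruptly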
In the above lane line recognition method, the terminal performs edge detection on the connected area to obtain the target edge area, and if the target edge area satisfies the preset condition, the position information of the target edge area is used as the position information of the lane line, where the preset condition is used to determine whether the target edge area includes a lane line. That is, after edge detection is performed on the connected area to obtain the target edge area, it is further judged whether the target edge area satisfies the preset condition, and the position information of a target edge area that satisfies the preset condition is used as the position information of the lane line. This avoids the situation where, if the position information of the target edge area obtained by edge extraction were used directly as the position information of the lane line, misjudgments would make the determined position information inaccurate, and it further improves the accuracy of the determined position information of the lane line.
On the basis of the above embodiments, when performing lane line recognition on the next frame image of the current frame image, target tracking can be performed on the next frame image according to the position information of the lane line in the current frame image, to obtain the position information of the lane line in the next frame image. Optionally, target tracking is performed on the next frame image of the current frame image according to the position information of the lane line in the current frame image, to obtain the position information of the lane line in the next frame image.
Here, once the position information of the lane line in the current frame image has been determined, the color and brightness of the lane line at that position can be compared with the next frame image, and the areas of the next frame image that match the color and brightness of the lane line in the current frame image are tracked to obtain the position information of the lane line in the next frame image.
Optionally, the embodiment shown in FIG. 7 can be used to perform target tracking on the next frame image of the current frame image to obtain the position information of the lane line in the next frame image, which includes the following steps:
S601: Divide the next frame image into multiple region images.
When performing target tracking on the next frame image according to the position information of the lane line, the illumination in the next frame image may change; for example, reflections caused by water on the road surface may produce a waterlogged area whose brightness differs markedly from other areas. If target tracking is performed directly on the whole next frame image, misjudgments caused by the excessive brightness of the waterlogged area are likely. In this case, the next frame image can be divided into multiple region images so that the brightness of the lane line within each region image is uniform, avoiding misjudgments caused by the excessive brightness of the waterlogged area.
S602: Select, as the target region image, the region image in the next frame image corresponding to the position information of the lane line in the current frame image.
S603: Perform target tracking on the target region image to obtain the position information of the lane line in the next frame image.
In the above lane line recognition method, the next frame image is divided into multiple region images, the region image in the next frame image corresponding to the position information of the lane line in the current frame image is selected as the target region image, and target tracking is performed on the target region image to obtain the position information of the lane line in the next frame image. This avoids abnormal-brightness areas in the next frame image caused by illumination changes, and thus avoids obtaining a wrong target region image through misjudgment of such areas, improving the accuracy of the position information of the lane line in the next frame image obtained by target tracking on the target region image.
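A minimal sketch of S601-S603, assuming grayscale frames and a fixed grid split; tracking by matching the previous lane patch's mean brightness is one simple way to realize the color-and-brightness matching described above, and the 4×4 grid size is an illustrative choice.

    import numpy as np

    def track_in_regions(prev_lane_patch, next_frame, candidate_cells, grid=(4, 4)):
        # candidate_cells: (row, col) grid cells overlapping the previous lane position.
        target = prev_lane_patch.mean()
        h, w = next_frame.shape
        rh, rw = h // grid[0], w // grid[1]
        best, best_diff = None, np.inf
        for r, c in candidate_cells:
            region = next_frame[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            diff = abs(region.mean() - target)  # brightness match within one cell
            if diff < best_diff:
                best, best_diff = (r, c), diff
        return best  # grid cell whose brightness best matches the tracked lane line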
After the position information of the lane line in the current frame image has been determined, when the position information of the lane line needs to be determined for the next frame image, a lane line estimation area can be determined according to the position information of the lane line in the current frame image, and the region image corresponding to the lane line estimation area in the next frame image is used as the next frame image. This is described in detail below through FIG. 8. As shown in FIG. 8, the method further includes the following steps:
S701: Determine the intersection point of the lane lines according to the position information of the lane line in the current frame image.
Usually, lane lines appear in pairs; that is, the lane lines in an environmental image are usually two lane lines. As shown in FIG. 8a, there is an intersection point on the extensions of the two lane lines, which is the intersection point of the lane lines. This intersection point usually lies on the horizon of the image.
S702: Determine the lane line estimation area according to the intersection point of the lane lines and the position information of the lane line in the current frame image.
Once the intersection point of the lane lines has been obtained, the current frame image can be divided into two regions according to the intersection point, and the region that includes the lane lines is used as the lane line estimation area. When the two regions into which the current frame image is divided are the upper region and the lower region of the image, then generally, since the intersection point usually lies on the horizon of the image, the upper region of the image is the sky and the lower region is the ground, i.e., the region in which the lane lines are located. The lower region of the image is determined as the lane line estimation area.
S703: Select the region image corresponding to the lane line estimation area in the next frame image of the current frame image as the environmental image of the next frame.
In the above lane line recognition method, the intersection point of the lane lines is determined according to the position information of the lane line in the current frame image, the lane line estimation area is determined according to the intersection point and the position information of the lane line in the current frame image, and the region image corresponding to the lane line estimation area in the next frame image of the current frame image is selected as the environmental image of the next frame. That is, the next frame image includes only the lane line estimation area, so that less data needs to be computed when determining the position information of the lane line in the next frame image, which improves the efficiency of determining that position information.
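A sketch of S701-S703, assuming each lane line has been fitted as y = k·x + b in image coordinates and that the two lines are not parallel: the intersection of the two fitted lines gives the lane line intersection point, and everything below it is kept as the estimation area for the next frame.

    def intersection_point(line1, line2):
        # line = (k, b) for y = k*x + b; assumes k1 != k2 (non-parallel lines).
        (k1, b1), (k2, b2) = line1, line2
        x = (b2 - b1) / (k1 - k2)
        return x, k1 * x + b1

    def estimation_area(frame, left_line, right_line):
        # Keep only the region below the intersection point (the ground region).
        _, y = intersection_point(left_line, right_line)
        return frame[max(int(y), 0):, :]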
When the recognition result of the lane line has been determined, whether to output warning information can also be determined according to the recognition result and the current position information of the vehicle. This is described in detail below through FIG. 9.
FIG. 9 is a schematic flowchart of a lane line recognition method in another embodiment. This embodiment relates to the specific process of determining whether to output warning information according to the position information of the lane line and the current position information of the vehicle. As shown in FIG. 9, the method further includes the following steps:
S801: Determine the driving state of the vehicle according to the position information of the lane line, the driving state of the vehicle including driving on a lane line.
On the basis of the above embodiments, once the position information of the lane line has been determined, the driving state of the vehicle, i.e., whether the vehicle is driving on a lane line, can be calculated from the position information of the image acquisition device mounted on the vehicle. For example, when the image acquisition device is mounted on the vehicle, whether the vehicle is driving on a lane line is determined from the mounting position of the image acquisition device on the vehicle, the lane line recognition result, and the vehicle's own parameters, such as the height and width of the vehicle.
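As one illustrative way to turn the mounting position of the image acquisition device and the vehicle's own width into a driving-state decision (the geometry below is an assumption for the sketch, not the patent's actual calculation): express half the vehicle width as a pixel span around the image column beneath the camera, and flag line-crossing when a detected lane line's column at the bottom row falls inside that span.

    def is_on_line(lane_x: float, camera_x: float, half_width_px: float) -> bool:
        # lane_x: image column of the lane line at the bottom row;
        # camera_x: column directly beneath the mounted camera;
        # half_width_px: half the vehicle width converted to pixels.
        return camera_x - half_width_px <= lane_x <= camera_x + half_width_px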
S802: If the driving state of the vehicle satisfies a preset warning condition, output warning information.
When the driving state of the vehicle is driving on a lane line and the preset warning condition is satisfied, warning information is output. Optionally, the warning condition includes the vehicle driving on a solid line, or the duration of the vehicle crossing a dashed line exceeding a preset duration threshold. That is, when the vehicle is driving on a lane line and that line is a solid line, or when the vehicle is driving on a lane line and the duration of crossing a dashed line exceeds the preset duration threshold, the driving state of the vehicle satisfies the preset warning condition and warning information is output, where the warning information may be a voice prompt, a beeping alarm, or flashing lights; the embodiments of this application place no limitation on this.
In the above lane line recognition method, the terminal determines the driving state of the vehicle according to the position information of the lane line and the current position information of the vehicle, the driving state including driving on a lane line, and outputs warning information if the driving state satisfies the preset warning condition; the warning condition includes the vehicle driving on a solid line, or the duration of the vehicle crossing a dashed line exceeding the preset duration threshold. In this way, warning information can be output to prompt the driver and ensure driving safety when the vehicle drives on a solid line or crosses a dashed line for longer than the preset duration threshold.
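The warning rule itself reduces to a small predicate; a sketch with an illustrative 2-second threshold (the patent leaves the threshold value unspecified):

    def should_warn(on_line: bool, line_is_solid: bool,
                    crossing_seconds: float, threshold_s: float = 2.0) -> bool:
        # Warn immediately on a solid line; on a dashed line only after the
        # crossing has lasted longer than the preset duration threshold.
        if not on_line:
            return False
        return line_is_solid or crossing_seconds > threshold_s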
It should be understood that, although the steps in the flowcharts of FIGS. 2-9 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-9 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
FIG. 10 is a schematic structural diagram of a lane line recognition device provided in an embodiment. As shown in FIG. 10, the lane line recognition device includes a detection module 10, a first determining module 20, and a second determining module 30, wherein:
the detection module 10 is configured to detect a current frame image collected by a vehicle and determine multiple detection frames in which a lane line in the current frame image is located;
the first determining module 20 is configured to determine a connected area according to the position information of the multiple detection frames, the connected area including the lane line; and
the second determining module 30 is configured to perform edge detection on the connected area and determine the position information of the lane line in the connected area.
In one embodiment, the detection module 10 is specifically configured to input the current frame image into a lane line classification model to obtain the multiple detection frames in which the lane line in the current frame image is located; the lane line classification model includes at least two cascaded classifiers.
The lane line recognition device provided in the embodiments of this application can execute the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 11 is a schematic structural diagram of a lane line recognition device provided in another embodiment. On the basis of the embodiment shown in FIG. 10, as shown in FIG. 11, the detection module 10 includes a scaling unit 101 and a first acquiring unit 102, wherein:
the scaling unit 101 is configured to perform a scaling operation on the current frame image according to the region size recognizable by the lane line classification model to obtain a scaled current frame image; and
the first acquiring unit 102 is configured to obtain the multiple detection frames in which the lane line in the current frame image is located according to the scaled current frame image and the lane line classification model.
In one embodiment, the first acquiring unit 102 is specifically configured to perform a sliding-window operation on the scaled current frame image according to a preset sliding-window size to obtain multiple images to be recognized, and to input the multiple images to be recognized into the lane line classification model in sequence to obtain the multiple detection frames in which the lane line is located.
The lane line recognition device provided in the embodiments of this application can execute the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 12 is a schematic structural diagram of a lane line recognition device provided in another embodiment. On the basis of the embodiment shown in FIG. 10 or FIG. 11, as shown in FIG. 12, the first determining module 20 includes a merging unit 201 and a first determining unit 202, wherein:
the merging unit 201 is configured to merge the multiple detection frames according to their position information and determine the merged area in which the multiple detection frames are located; and
the first determining unit 202 is configured to determine the connected area corresponding to the multiple detection frames according to the merged area.
It should be noted that FIG. 12 is shown based on FIG. 11; of course, FIG. 12 may also be shown based on FIG. 10, and this is only an example.
The lane line recognition device provided in the embodiments of this application can execute the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 13 is a schematic structural diagram of a lane line recognition device provided in another embodiment. On the basis of the embodiment shown in any one of FIGS. 10-12, as shown in FIG. 13, the second determining module 30 includes a detection unit 301 and a second determining unit 302, wherein:
the detection unit 301 is configured to perform edge detection on the connected area to obtain a target edge area; and
the second determining unit 302 is configured to use the position information of the target edge area as the position information of the lane line when the target edge area satisfies a preset condition.
In one embodiment, the preset condition includes at least one of the following: the target edge area includes a left edge and a right edge; the far-end width of the target edge area is less than the near-end width; and the far-end width of the target edge area is greater than the product of the near-end width and a width coefficient.
It should be noted that FIG. 13 is shown based on FIG. 12; of course, FIG. 13 may also be shown based on FIG. 10 or FIG. 11, and this is only an example.
The lane line recognition device provided in the embodiments of this application can execute the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 14 is a schematic structural diagram of a lane line recognition device provided in another embodiment. On the basis of the embodiment shown in any one of FIGS. 10-13, as shown in FIG. 14, the lane line recognition device further includes a tracking module 40, wherein:
the tracking module 40 is configured to perform target tracking on the next frame image of the current frame image according to the position information of the lane line in the current frame image, to obtain the position information of the lane line in the next frame image.
In one embodiment, the tracking module 40 is specifically configured to divide the next frame image into multiple region images; select, as the target region image, the region image in the next frame image corresponding to the recognition result of the lane line in the current frame image; and perform target tracking on the target region image to obtain the position information of the lane line in the next frame image.
It should be noted that FIG. 14 is shown based on FIG. 13; of course, FIG. 14 may also be shown based on any one of FIGS. 10-12, and this is only an example.
The lane line recognition device provided in the embodiments of this application can execute the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 15 is a schematic structural diagram of a lane line recognition device provided in another embodiment. On the basis of the embodiment shown in any one of FIGS. 10-14, as shown in FIG. 15, the lane line recognition device further includes a selecting module 50, wherein:
the selecting module 50 is specifically configured to determine the intersection point of the lane lines according to the position information of the lane line in the current frame image; determine the lane line estimation area according to the intersection point of the lane lines and the position information of the lane line in the current frame image; and select the region image corresponding to the lane line estimation area in the next frame image of the current frame image as the next frame image.
It should be noted that FIG. 15 is shown based on FIG. 14; of course, FIG. 15 may also be shown based on any one of FIGS. 10-13, and this is only an example.
The lane line recognition device provided in the embodiments of this application can execute the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 16 is a schematic structural diagram of a lane line recognition device provided in another embodiment. On the basis of the embodiment shown in any one of FIGS. 10-15, as shown in FIG. 16, the lane line recognition device further includes a warning module 60, wherein:
the warning module 60 is specifically configured to determine the driving state of the vehicle according to the position information of the lane line, the driving state of the vehicle including driving on a lane line, and to output warning information if the driving state of the vehicle satisfies a preset warning condition.
In one embodiment, the warning condition includes the vehicle driving on a solid line, or the duration of the vehicle crossing a dashed line exceeding a preset duration threshold.
It should be noted that FIG. 16 is shown based on FIG. 15; of course, FIG. 16 may also be shown based on any one of FIGS. 10-14, and this is only an example.
The lane line recognition device provided in the embodiments of this application can execute the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
For the specific limitations of the lane line recognition device, reference may be made to the limitations of the lane line recognition method above, which will not be repeated here. Each module in the above lane line recognition device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a terminal device, and its internal structure diagram may be as shown in FIG. 17. The computer device includes a processor, a memory, a network interface, a display screen, and an input apparatus connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a lane line recognition method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input apparatus of the computer device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
Those skilled in the art can understand that the structure shown in FIG. 17 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution of this application is applied. A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a terminal device is provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
detecting a current frame image collected by a vehicle, and determining multiple detection frames in which a lane line in the current frame image is located;
determining a connected area according to position information of the multiple detection frames, the connected area including the lane line; and
performing edge detection on the connected area, and determining position information of the lane line in the connected area.
In one embodiment, the processor further implements the following steps when executing the computer program: inputting the current frame image into a lane line classification model to obtain the multiple detection frames in which the lane line in the current frame image is located; the lane line classification model includes at least two cascaded classifiers.
In one embodiment, the processor further implements the following steps when executing the computer program: performing a scaling operation on the current frame image according to the region size recognizable by the lane line classification model to obtain a scaled current frame image; and obtaining the multiple detection frames in which the lane line in the current frame image is located according to the scaled current frame image and the lane line classification model.
In one embodiment, the processor further implements the following steps when executing the computer program: performing a sliding-window operation on the scaled current frame image according to a preset sliding-window size to obtain multiple images to be recognized; and inputting the multiple images to be recognized into the lane line classification model in sequence to obtain the multiple detection frames in which the lane line is located.
In one embodiment, the processor further implements the following steps when executing the computer program: merging the multiple detection frames according to their position information and determining the merged area in which the multiple detection frames are located; and determining the connected area corresponding to the multiple detection frames according to the merged area.
In one embodiment, the processor further implements the following steps when executing the computer program: performing edge detection on the connected area to obtain a target edge area; and, when the target edge area satisfies a preset condition, using the position information of the target edge area as the position information of the lane line.
In one embodiment, the preset condition includes at least one of the following: the target edge area includes a left edge and a right edge; the far-end width of the target edge area is less than the near-end width; and the far-end width of the target edge area is greater than the product of the near-end width and a width coefficient.
In one embodiment, the processor further implements the following steps when executing the computer program: performing target tracking on the next frame image of the current frame image according to the position information of the lane line in the current frame image, to obtain the position information of the lane line in the next frame image.
In one embodiment, the processor further implements the following steps when executing the computer program: dividing the next frame image into multiple region images; selecting, as the target region image, the region image in the next frame image corresponding to the position information of the lane line in the current frame image; and performing target tracking on the target region image to obtain the position information of the lane line in the next frame image.
In one embodiment, the processor further implements the following steps when executing the computer program: determining the intersection point of the lane lines according to the position information of the lane line in the current frame image; determining the lane line estimation area according to the intersection point of the lane lines and the position information of the lane line in the current frame image; and selecting the region image corresponding to the lane line estimation area in the next frame image of the current frame image as the next frame image.
In one embodiment, the processor further implements the following steps when executing the computer program: determining the driving state of the vehicle according to the position information of the lane line, the driving state of the vehicle including driving on a lane line; and outputting warning information if the driving state of the vehicle satisfies a preset warning condition.
In one embodiment, the warning condition includes the vehicle driving on a solid line, or the duration of the vehicle crossing a dashed line exceeding a preset duration threshold.
The implementation principles and technical effects of the terminal device provided in this embodiment are similar to those of the above method embodiments and will not be repeated here.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the following steps:
detecting a current frame image captured by a vehicle to determine multiple detection boxes where the lane lines in the current frame image are located;
determining a connected region according to the position information of the multiple detection boxes, the connected region including the lane lines; and
performing edge detection on the connected region to determine the position information of the lane lines in the connected region.
In one embodiment, when executed by the processor, the computer program implements the following steps: inputting the current frame image into a lane line classification model to obtain multiple detection boxes where the lane lines in the current frame image are located, the lane line classification model including at least two cascaded classifiers.
In one embodiment, when executed by the processor, the computer program implements the following steps: scaling the current frame image according to the region size recognizable by the lane line classification model, to obtain a scaled current frame image; and obtaining, according to the scaled current frame image and the lane line classification model, the multiple detection boxes where the lane lines in the current frame image are located.
In one embodiment, when executed by the processor, the computer program implements the following steps: performing a sliding-window operation on the scaled current frame image according to a preset sliding-window size, to obtain multiple images to be recognized; and inputting the multiple images to be recognized into the lane line classification model in sequence, to obtain the multiple detection boxes where the lane lines are located.
In one embodiment, when executed by the processor, the computer program implements the following steps: merging the multiple detection boxes according to the position information of the multiple detection boxes, to determine a merged region where the multiple detection boxes are located; and determining, according to the merged region, the connected region corresponding to the multiple detection boxes.
In one embodiment, when executed by the processor, the computer program implements the following steps: performing edge detection on the connected region to obtain a target edge region; and, when the target edge region satisfies a preset condition, taking the position information of the target edge region as the position information of the lane line.
In one of the embodiments, the preset condition includes at least one of the following: the target edge region includes a left edge and a right edge; the far-end width of the target edge region is smaller than its near-end width; and the far-end width of the target edge region is greater than the product of the near-end width and a width coefficient.
In one embodiment, when executed by the processor, the computer program implements the following steps: performing target tracking on the next frame image of the current frame image according to the position information of the lane line in the current frame image, to obtain the position information of the lane line in the next frame image.
In one embodiment, when executed by the processor, the computer program implements the following steps: dividing the next frame image into multiple region images; selecting, as a target region image, the region image in the next frame image corresponding to the position information of the lane line in the current frame image; and performing target tracking on the target region image to obtain the position information of the lane line in the next frame image.
In one embodiment, when executed by the processor, the computer program implements the following steps: determining an intersection point of the lane lines according to the position information of the lane line in the current frame image; determining an estimated lane line region according to the intersection point of the lane lines and the position information of the lane line in the current frame image; and selecting, as the next frame image, the region image corresponding to the estimated lane line region in the next frame image of the current frame image.
In one embodiment, when executed by the processor, the computer program implements the following steps: determining the driving state of the vehicle according to the position information of the lane line, the driving state of the vehicle including driving on a lane line; and outputting warning information if the driving state of the vehicle satisfies a preset warning condition.
In one embodiment, the warning condition includes the vehicle driving on a solid line, or the duration for which the vehicle drives on a dashed line exceeding a preset duration threshold.
The computer-readable storage medium provided in this embodiment has implementation principles and technical effects similar to those of the foregoing method embodiments, which will not be repeated here.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the computing processing device according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form. For example, FIG. 18 shows a computing processing device that can implement the method of the present invention. The computing processing device may be a computer device and conventionally includes a processor 1010 and a computer program product or computer-readable medium in the form of a memory 1020. The memory 1020 has a storage space 1030 for program code 1031 for executing any of the method steps in the above methods. For example, the storage space 1030 for program code may include individual program codes 1031 for implementing the various steps in the above methods. These program codes may be read from, or written into, one or more computer program products. These computer program products include program code carriers such as a hard disk, a compact disc (CD), a memory card, or a floppy disk. Such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. 14. The storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 1020 in the computing processing device of FIG. 19. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes computer-readable code 1031', that is, code that can be read by a processor such as 1010; when run by a computing processing device, the code causes the computing processing device to execute the various steps in the methods described above.
Reference herein to "one embodiment", "an embodiment", or "one or more embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. In addition, note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
Numerous specific details are set forth in the specification provided herein. However, it is understood that the embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of multiple such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several apparatuses, several of these apparatuses may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any order; these words may be interpreted as names. The technical features of the above embodiments may be combined arbitrarily; for brevity of description, not all possible combinations of the technical features of the above embodiments are described, but as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification. The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (15)

  1. A lane line recognition method, characterized in that the method comprises:
    detecting a current frame image captured by a vehicle to determine multiple detection boxes where lane lines in the current frame image are located;
    determining a connected region according to position information of the multiple detection boxes, the connected region comprising the lane lines; and
    performing edge detection on the connected region to determine position information of the lane lines in the connected region.
  2. The method according to claim 1, characterized in that detecting the current frame image captured by the vehicle to determine the multiple detection boxes where the lane lines in the current frame image are located comprises:
    inputting the current frame image into a lane line classification model to obtain the multiple detection boxes where the lane lines in the current frame image are located, the lane line classification model comprising at least two cascaded classifiers.
  3. The method according to claim 2, characterized in that inputting the current frame image into the lane line classification model to obtain the multiple detection boxes where the lane lines in the current frame image are located comprises:
    scaling the current frame image according to a region size recognizable by the lane line classification model, to obtain a scaled current frame image; and
    obtaining, according to the scaled current frame image and the lane line classification model, the multiple detection boxes where the lane lines in the current frame image are located.
  4. The method according to claim 3, characterized in that obtaining, according to the scaled current frame image and the lane line classification model, the multiple detection boxes where the lane lines in the current frame image are located comprises:
    performing a sliding-window operation on the scaled current frame image according to a preset sliding-window size, to obtain multiple images to be recognized; and
    inputting the multiple images to be recognized into the lane line classification model in sequence, to obtain the multiple detection boxes where the lane lines are located.
  5. The method according to any one of claims 1-4, characterized in that determining the connected region according to the position information of the multiple detection boxes comprises:
    merging the multiple detection boxes according to the position information of the multiple detection boxes, to determine a merged region where the multiple detection boxes are located; and
    determining, according to the merged region, the connected region corresponding to the multiple detection boxes.
  6. The method according to any one of claims 1-4, characterized in that performing edge detection on the connected region to determine the position information of the lane lines in the connected region comprises:
    performing the edge detection on the connected region to obtain a target edge region; and
    when the target edge region satisfies a preset condition, taking position information of the target edge region as the position information of the lane lines.
  7. The method according to claim 6, characterized in that the preset condition comprises at least one of the following: the target edge region comprises a left edge and a right edge; a far-end width of the target edge region is smaller than a near-end width thereof; and the far-end width of the target edge region is greater than the product of the near-end width and a width coefficient.
  8. The method according to any one of claims 1-3, characterized in that the method further comprises:
    performing target tracking on a next frame image of the current frame image according to the position information of the lane lines in the current frame image, to determine position information of the lane lines in the next frame image.
  9. The method according to claim 8, characterized in that performing target tracking on the next frame image of the current frame image according to the position information of the lane lines in the current frame image, to determine the position information of the lane lines in the next frame image, comprises:
    dividing the next frame image into multiple region images;
    selecting, as a target region image, the region image in the next frame image corresponding to the position information of the lane lines in the current frame image; and
    performing target tracking on the target region image to obtain the position information of the lane lines in the next frame image.
  10. The method according to claim 8, characterized in that the method further comprises:
    determining an intersection point of the lane lines according to the position information of the lane lines in the current frame image;
    determining an estimated lane line region according to the intersection point of the lane lines and the position information of the lane lines in the current frame image; and
    selecting, as the next frame image, the region image corresponding to the estimated lane line region in the next frame image of the current frame image.
  11. The method according to any one of claims 1-4, characterized in that the method further comprises:
    determining a driving state of the vehicle according to the position information of the lane lines, the driving state of the vehicle comprising driving on a lane line; and
    outputting warning information when the driving state of the vehicle satisfies a preset warning condition.
  12. The method according to claim 11, characterized in that the warning condition comprises the vehicle driving on a solid line, or a duration for which the vehicle drives on a dashed line exceeding a preset duration threshold.
  13. A lane line recognition apparatus, characterized in that the apparatus comprises:
    a detection module, configured to detect a current frame image captured by a vehicle to determine multiple detection boxes where lane lines in the current frame image are located;
    a first determination module, configured to determine a connected region according to position information of the multiple detection boxes, the connected region comprising the lane lines; and
    a second determination module, configured to perform edge detection on the connected region to determine position information of the lane lines in the connected region.
  14. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-12.
  15. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-12.
PCT/CN2020/115390 2019-11-21 2020-09-15 Lane line recognition method, apparatus, device and storage medium WO2021098359A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/767,367 US20220375234A1 (en) 2019-11-21 2020-09-15 Lane line recognition method, device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911147428.8A CN111160086B (zh) 2019-11-21 2019-11-21 Lane line recognition method, apparatus, device and storage medium
CN201911147428.8 2019-11-21

Publications (1)

Publication Number Publication Date
WO2021098359A1 true WO2021098359A1 (zh) 2021-05-27

Family

ID=70556048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115390 WO2021098359A1 (zh) 2019-11-21 2020-09-15 Lane line recognition method, apparatus, device and storage medium

Country Status (3)

Country Link
US (1) US20220375234A1 (zh)
CN (1) CN111160086B (zh)
WO (1) WO2021098359A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160086B (zh) 2019-11-21 2023-10-13 芜湖迈驰智行科技有限公司 Lane line recognition method, apparatus, device and storage medium
CN111814746A (zh) * 2020-08-07 2020-10-23 平安科技(深圳)有限公司 Method, apparatus, device and storage medium for recognizing lane lines
CN114332699B (zh) * 2021-12-24 2023-12-12 中国电信股份有限公司 Road condition prediction method, apparatus, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632140A * 2013-11-27 2014-03-12 智慧城市系统服务(中国)有限公司 Lane line detection method and apparatus
CN104063869A * 2014-06-27 2014-09-24 南京通用电器有限公司 Lane line detection method based on Beamlet transform
CN106228125A * 2016-07-15 2016-12-14 浙江工商大学 Lane line detection method based on ensemble-learning cascaded classifiers
CN111160086A (zh) 2019-11-21 2020-05-15 成都旷视金智科技有限公司 Lane line recognition method, apparatus, device and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI438729B (zh) 2011-11-16 2014-05-21 Ind Tech Res Inst Lane departure warning method and system
CN103500322B (zh) * 2013-09-10 2016-08-17 北京航空航天大学 Automatic lane line recognition method based on low-altitude aerial images
CN103630122B (zh) * 2013-10-15 2015-07-15 北京航天科工世纪卫星科技有限公司 Monocular-vision lane line detection method and ranging method therefor
CN104036253A (zh) * 2014-06-20 2014-09-10 智慧城市系统服务(中国)有限公司 Lane line tracking method and system
CN104657727B (zh) * 2015-03-18 2018-01-02 厦门麦克玛视电子信息技术有限公司 Lane line detection method
CN105069411B (zh) * 2015-07-24 2019-03-29 深圳市佳信捷技术股份有限公司 Road recognition method and apparatus
CN105260713B (zh) * 2015-10-09 2019-06-28 东方网力科技股份有限公司 Lane line detection method and apparatus
CN107229908B (zh) * 2017-05-16 2019-11-29 浙江理工大学 Lane line detection method
CN109325389A (zh) * 2017-07-31 2019-02-12 比亚迪股份有限公司 Lane line recognition method, apparatus and vehicle
CN109543493B (zh) * 2017-09-22 2020-11-20 杭州海康威视数字技术股份有限公司 Lane line detection method, apparatus and electronic device
CN108875607A (zh) * 2017-09-29 2018-11-23 惠州华阳通用电子有限公司 Lane line detection method, apparatus and computer-readable storage medium
CN108038416B (zh) * 2017-11-10 2021-09-24 智车优行科技(北京)有限公司 Lane line detection method and system
CN109858307A (zh) * 2017-11-30 2019-06-07 高德软件有限公司 Lane line recognition method and apparatus
CN109101957B (zh) * 2018-10-29 2019-07-12 长沙智能驾驶研究院有限公司 Binocular stereo data processing method and apparatus, intelligent driving device and storage medium
CN109949578B (zh) * 2018-12-31 2020-11-24 上海眼控科技股份有限公司 Deep-learning-based automatic review method for vehicle line-crossing violations
CN109886122B (zh) * 2019-01-23 2021-01-29 珠海市杰理科技股份有限公司 Lane line detection method and apparatus, computer device and storage medium


Also Published As

Publication number Publication date
CN111160086A (zh) 2020-05-15
US20220375234A1 (en) 2022-11-24
CN111160086B (zh) 2023-10-13

Similar Documents

Publication Publication Date Title
WO2021098359A1 (zh) Lane line recognition method, apparatus, device and storage medium
US20210213961A1 Driving scene understanding
Liu et al. Condlanenet: a top-to-down lane detection framework based on conditional convolution
US11643076B2 Forward collision control method and apparatus, electronic device, program, and medium
US10489913B2 Methods and apparatuses, and computing devices for segmenting object
CN110378297B (zh) Deep-learning-based remote sensing image target detection method, apparatus and storage medium
KR101546590B1 (ko) Hough transform for circles
CN110852285A (zh) Object detection method and apparatus, computer device and storage medium
JP2022025008A (ja) License plate recognition method based on text line recognition
Zhang et al. Simultaneous pixel-level concrete defect detection and grouping using a fully convolutional model
CN110675637A (zh) Method and apparatus for processing vehicle violation videos, computer device and storage medium
JP6255944B2 (ja) Image analysis apparatus, image analysis method and image analysis program
CN112712703A (zh) Vehicle video processing method and apparatus, computer device and storage medium
Kumar SEAT-YOLO: A squeeze-excite and spatial attentive you only look once architecture for shadow detection
CN110796130A (zh) Method, apparatus and computer storage medium for text recognition
US20230368397A1 Method and system for detecting moving object
CN111179212B (zh) On-chip implementation method for tiny target detection integrating a distillation strategy and deconvolution
CN113761981B (zh) Autonomous driving visual perception method, apparatus and storage medium
CN114022848B (zh) Control method and system for automatic tunnel lighting
US20130279808A1 Complex-object detection using a cascade of classifiers
CN112101139B (zh) Human figure detection method, apparatus, device and storage medium
JP6304473B2 (ja) Image processing system, image processing method and program
CN112766128A(zh) Traffic light detection method and apparatus, and computer device
CN115346143A (zh) Behavior detection method, electronic device and computer-readable medium
Wang et al. G-NET: Accurate Lane Detection Model for Autonomous Vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20888976

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20888976

Country of ref document: EP

Kind code of ref document: A1