CN113807125A - Emergency lane occupation detection method and device, computer equipment and storage medium - Google Patents



Publication number
CN113807125A
CN113807125A
Authority
CN
China
Prior art keywords
target
emergency lane
detection
vehicle
lane line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010533926.2A
Other languages
Chinese (zh)
Inventor
王文星
王鹏飞
王治金
李京
王向鸿
熊君君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Fengchi Shunxing Information Technology Co Ltd
Original Assignee
Shenzhen Fengchi Shunxing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Fengchi Shunxing Information Technology Co Ltd filed Critical Shenzhen Fengchi Shunxing Information Technology Co Ltd
Priority to CN202010533926.2A priority Critical patent/CN113807125A/en
Publication of CN113807125A publication Critical patent/CN113807125A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Abstract

The application relates to an emergency lane occupation detection method and device, a computer device, and a storage medium. The method comprises the following steps: acquiring an image frame; performing emergency lane line segmentation on the image frame; when a target emergency lane line is segmented from the image frame, performing target vehicle detection on the image frame; when a target vehicle is detected from the image frame, performing emergency lane occupation detection on the target vehicle according to the position of the target emergency lane line and the position of the target vehicle; and when it is determined that the target vehicle is involved in an emergency lane occupation event, sending emergency lane occupation information corresponding to the target vehicle to a central management platform. Adopting this method improves the accuracy of emergency lane occupation detection.

Description

Emergency lane occupation detection method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of traffic, in particular to an emergency lane occupation detection method and device, computer equipment and a storage medium.
Background
An emergency lane, also called a life passage, is a lane reserved for vehicles engaged in engineering rescue, firefighting, medical rescue, or police emergency services. Unauthorized occupation of the emergency lane causes serious harm to society and individuals, such as delaying emergency rescue, aggravating road congestion, and increasing the risk of accidents. How to detect emergency lane occupation events is therefore a matter of concern.
At present, fixed-point detection based on stationary cameras is generally adopted: cameras deployed at important or accident-prone road sections collect and send surveillance video, and a server performs emergency lane occupation detection on the video it receives. However, this approach is limited by the deployment positions and number of the cameras, so the coverage of emergency lane occupation detection is limited and the detection accuracy is low.
Disclosure of Invention
In view of the above, it is necessary to provide an emergency lane occupation detection method and device, a computer device, and a storage medium that can improve the accuracy of emergency lane occupation detection.
An emergency lane occupancy detection method, the method comprising:
acquiring an image frame;
performing emergency lane line segmentation on the image frames;
when a target emergency lane line is segmented from the image frames, performing target vehicle detection on the image frames;
when a target vehicle is detected from the image frame, performing emergency lane occupation detection on the target vehicle according to the position of the target emergency lane line and the position of the target vehicle;
and when the target vehicle is judged to have an emergency lane occupation event, sending emergency lane occupation information corresponding to the target vehicle to a central management platform.
In one embodiment, the performing emergency lane line segmentation on the image frame includes:
performing lane line segmentation on the image frame through a trained lane line segmentation model to obtain a solid white lane line segmentation map;
screening candidate emergency lane lines based on the solid white lane line segmentation map;
and determining a target emergency lane line from the candidate emergency lane lines.
In one embodiment, the determining a target emergency lane line from the candidate emergency lane lines includes:
determining a lane line profile of each candidate emergency lane line;
determining the intercept corresponding to the corresponding candidate emergency lane line based on the lane line profile;
and determining a target emergency lane line from the candidate emergency lane lines according to the determined intercept.
In one embodiment, the performing target vehicle detection on the image frames includes:
carrying out vehicle detection on the image frames through a trained vehicle detection model to obtain a vehicle detection frame corresponding to each candidate vehicle in the image frames;
and screening target vehicles from the candidate vehicles according to the vehicle detection frame.
In one embodiment, the method further comprises:
determining the position of the target emergency lane line according to the intercept and the slope corresponding to the target emergency lane line;
and determining the position of the target vehicle according to the vehicle detection frame corresponding to the target vehicle.
In one embodiment, the detecting emergency lane occupancy of the target vehicle according to the position of the target emergency lane line and the position of the target vehicle includes:
judging whether the target vehicle is positioned on the right side of the target emergency lane line or not according to the position of the target emergency lane line and the position of the target vehicle;
and when the target vehicle is positioned at the right side of the target emergency lane line, judging that the target vehicle has an emergency lane occupation event.
In one embodiment, the acquiring the image frame includes:
dynamically acquiring positioning information of a target detection vehicle;
and when the target detection vehicle is judged to be in a preset detection road section according to the positioning information, acquiring an image frame corresponding to the target detection vehicle.
An emergency lane occupancy detection device, the device comprising:
the acquisition module is used for acquiring image frames;
the segmentation module is used for performing emergency lane line segmentation on the image frames;
the first detection module is used for detecting a target vehicle in the image frame when a target emergency lane line is segmented from the image frame;
the second detection module is used for detecting the emergency lane occupation of the target vehicle according to the position of the target emergency lane line and the position of the target vehicle when the target vehicle is detected from the image frame;
and the sending module is used for sending the emergency lane occupation information corresponding to the target vehicle to a central management platform when the target vehicle is judged to have the emergency lane occupation event.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the above-described method embodiments when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
According to the emergency lane occupation detection method and device, computer device, and storage medium, emergency lane line segmentation and target vehicle detection are executed sequentially: target vehicle detection is performed on an image frame only when a target emergency lane line has been segmented from it. This avoids unnecessary vehicle-detection operations when no target emergency lane line exists in the image frame, improving both detection accuracy and efficiency. Furthermore, whether a target vehicle is involved in an emergency lane occupation event is judged from the positions of the emergency lane line and the target vehicle in the image frame, which further improves detection efficiency while preserving detection accuracy.
Drawings
FIG. 1 is a diagram of an application environment of an emergency lane occupation detection method in one embodiment;
FIG. 2 is a schematic flow chart of an emergency lane occupation detection method in one embodiment;
FIG. 3 is a schematic flow chart of an emergency lane occupation detection method in another embodiment;
FIG. 4 is a schematic diagram of an emergency lane occupation detection method in one embodiment;
FIG. 5 is a block diagram of an emergency lane occupation detection apparatus in one embodiment;
FIG. 6 is a diagram of the internal structure of a computer device in one embodiment;
FIG. 7 is a diagram of the internal structure of a computer device in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The emergency lane occupation detection method provided by the application can be applied in the environment shown in fig. 1, where the detection device 102 communicates with the central management platform 104 via a network. The detection device 102 acquires an image frame and performs emergency lane line segmentation on it. When a target emergency lane line is segmented from the image frame, the detection device performs target vehicle detection on the image frame; when a target vehicle is detected, it performs emergency lane occupation detection on the target vehicle according to the positions of the target emergency lane line and the target vehicle in the image frame; and when it determines that the target vehicle is involved in an emergency lane occupation event, it sends the corresponding emergency lane occupation information to the central management platform. The detection device 102 may be a terminal or a server; fig. 1 illustrates it as a terminal. The terminal may be, but is not limited to, a personal computer, smartphone, portable wearable device, or other vehicle-mounted device capable of implementing emergency lane occupation detection; a server may be an independent server or a cluster of multiple servers. The central management platform 104 may likewise be implemented as a stand-alone server or a server cluster.
In one embodiment, as shown in fig. 2, an emergency lane occupancy detection method is provided, which is described by taking the example of the method applied to the detection device in fig. 1, and includes the following steps:
at step 202, an image frame is acquired.
The image frame is a video frame extracted from a video, or an image frame extracted from an image set. The image set is a set composed of a plurality of image frames, and may be specifically an image sequence, that is, a sequence composed of a plurality of image frames ordered according to the acquisition time stamp.
In one embodiment, the detection device collects a video or image set in real time through an image acquisition device and extracts image frames from it. When the detection device is a terminal, the image acquisition device can be built into the terminal as a component, or externally connected to the terminal as an independent device. When the detection device is a server, the image acquisition device communicates with the server as an independent device, for example over a mobile network such as 4G (fourth-generation mobile communication technology) and/or a wireless connection such as Wi-Fi (Wireless Fidelity). Image acquisition devices include, but are not limited to, cameras, driving recorders, and other devices that can capture video or image sets.
In one embodiment, when the detection device is a terminal, the terminal can be fixedly installed on a target detection vehicle, or placed on the target detection vehicle when emergency lane occupation detection is required. The target detection vehicle is any vehicle capable of running on a highway and is not specifically limited here; it may in particular be a logistics vehicle, such as a Shunfeng logistics vehicle.
In one embodiment, the detection device acquires image frames from a video or image set at a preset frequency, and after each frame is acquired performs the emergency lane occupation detection steps provided by the present application on the currently acquired frame. The preset frequency is a predetermined rate for extracting image frames from the video or image set and performing emergency lane occupation detection on the extracted frames; it can be set according to actual conditions, for example 3 frames/second.
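The preset-frequency extraction above can be sketched as follows. This is an illustrative example rather than the patent's implementation; the function name and the 30 fps source rate are assumptions.

```python
def sampled_frame_indices(total_frames: int, video_fps: float, detect_fps: float):
    """Return the indices of frames to extract when sampling a video recorded
    at video_fps down to the preset detection frequency detect_fps."""
    if detect_fps <= 0 or video_fps <= 0:
        raise ValueError("frame rates must be positive")
    step = video_fps / detect_fps        # spacing between sampled frames
    indices = []
    pos = 0.0
    while pos < total_frames:
        indices.append(int(pos))
        pos += step
    return indices

# e.g. a 30 fps video of 50 frames sampled at the preset 3 frames/second
idx = sampled_frame_indices(total_frames=50, video_fps=30.0, detect_fps=3.0)
# -> [0, 10, 20, 30, 40]
```

Each returned index would then be fed to the segmentation and detection steps described below.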
In one embodiment, when the detection device acquires image frames for emergency lane occupation detection at a preset frequency, the image acquisition device extracts image frames at that frequency from the video or image set collected in real time and transmits them to the detection device, which then performs emergency lane occupation detection on the received frames.
And step 204, performing emergency lane line segmentation on the image frame.
Specifically, the detection device segments the solid white lane lines from the image frame, and when one or more are segmented, determines the target emergency lane line based on them. When the image frame contains a target emergency lane line, the detection device can determine it from the segmented solid white lane lines.
In one embodiment, the detection device performs lane line segmentation on the image frame through a trained lane line segmentation model to segment solid white lane lines from it, determines based on the segmented lines whether a target emergency lane line exists in the frame, and, if so, determines the target emergency lane line from them. After segmenting the solid white lane lines, the detection device may either directly determine the target emergency lane line from them, or first screen out candidate emergency lane lines that may serve as the target; when candidates are found, the detection device determines that a target emergency lane line exists in the image frame and selects it from the screened candidates.
And step 206, when the target emergency lane line is segmented from the image frame, detecting the target vehicle in the image frame.
Specifically, when the target emergency lane line is segmented from the image frame, it indicates that the target emergency lane line exists in the image frame, that is, the emergency lane exists in the image frame, and the detection device further performs target vehicle detection on the image frame.
In one embodiment, the detection device performs target vehicle detection on the image frames through a trained vehicle detection model to detect a target vehicle in the image frames, so as to further determine whether the target vehicle has an emergency lane occupation event.
And step 208, when the target vehicle is detected from the image frame, performing emergency lane occupation detection on the target vehicle according to the position of the target emergency lane line and the position of the target vehicle.
Specifically, when a target vehicle is detected from the image frame, it indicates that a target vehicle needing further emergency lane occupation detection exists in the image frame, and the detection device performs emergency lane occupation detection on the target vehicle according to the position of each target vehicle in the image frame and the position of a target emergency lane line, so as to determine whether an emergency lane occupation event exists in the target vehicle.
In one embodiment, for each target vehicle, the detection device compares the position of the target vehicle with the position of a target emergency lane line in an image frame, and determines that the target vehicle has an emergency lane occupancy event when the position of the target vehicle is determined to be on the right side of the target emergency lane line according to the comparison result. The detection device performs the emergency lane occupation detection operation for each target vehicle detected from the image frame to determine whether each detected target vehicle has an emergency lane occupation event, so that whether the target vehicle occupying the emergency lane exists in the image frame can be determined.
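The right-side comparison described above can be sketched as follows, assuming (as an illustration, not taken from the patent) that the target emergency lane line is parameterized as x = slope * y + intercept in image coordinates with the origin at the top left:

```python
def is_right_of_lane_line(vehicle_box, slope, intercept):
    """vehicle_box = (x1, y1, x2, y2) in image coordinates (origin top-left).
    The lane line is modelled as x = slope * y + intercept, so at the
    vehicle's bottom edge we can compare horizontal positions directly."""
    x1, y1, x2, y2 = vehicle_box
    bottom_center_x = (x1 + x2) / 2.0   # horizontal center of the vehicle box
    line_x = slope * y2 + intercept      # lane-line x at the vehicle's bottom y
    return bottom_center_x > line_x      # right of the line -> occupying
```

A vehicle whose bottom-center point lies to the right of the fitted line at the same image row would be flagged as having an emergency lane occupation event.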
And step 210, when the target vehicle is judged to have the emergency lane occupation event, sending emergency lane occupation information corresponding to the target vehicle to the central management platform.
The emergency lane occupation information is information representing that the target vehicle occupies an emergency lane. It may specifically include the image frame from which the emergency lane occupation event was determined, and may further include one or more of the vehicle detection frame corresponding to the target vehicle, current positioning information, the collection timestamp of the image frame, the device identifier of the detection device, and the like. The current positioning information represents the current position of the target vehicle and may specifically be GPS (Global Positioning System) information. The detection device may use its own current positioning information as the current positioning information of the corresponding target vehicle.
Specifically, when it is determined that the target vehicle has an emergency lane occupation event, the detection device acquires emergency lane occupation information corresponding to the target vehicle and sends the acquired emergency lane occupation information to the central management platform.
In an embodiment, if it is determined that the image frame includes a plurality of target vehicles with emergency lane occupancy events in the manner described above, the detection device may respectively obtain corresponding emergency lane occupancy information for each target vehicle occupying the emergency lane, or obtain one emergency lane occupancy information for the plurality of target vehicles occupying the emergency lane, and send the obtained emergency lane occupancy information to the central management platform.
In one embodiment, when the detection device determines, in the above manner, that a target vehicle occupying the emergency lane exists in the image frame, it can trigger and issue alarm information. That is, the detection device may send the alarm information to the central management platform, and may also broadcast the alarm by voice to the target vehicle currently occupying the emergency lane, prompting it to leave the emergency lane as soon as possible.
According to this emergency lane occupation detection method, emergency lane line segmentation and target vehicle detection are executed sequentially: target vehicle detection is performed on the image frame only when a target emergency lane line has been segmented from it. This avoids unnecessary vehicle-detection operations when no target emergency lane line exists in the frame, improving both detection accuracy and efficiency. Furthermore, whether a target vehicle is involved in an emergency lane occupation event is judged from the positions of the emergency lane line and the target vehicle in the image frame, which further improves detection efficiency while preserving detection accuracy.
In one embodiment, step 204 includes: performing lane line segmentation on the image frame through a trained lane line segmentation model to obtain a solid white lane line segmentation map; screening candidate emergency lane lines based on the segmentation map; and determining a target emergency lane line from the candidate emergency lane lines.
The lane line segmentation model is trained on a pre-collected first training sample set and can segment solid white lane lines from an image frame. The first training sample set includes sample image frames and corresponding sample solid white lane line segmentation maps. A solid white lane line segmentation map is an image in which the solid white lane lines segmented from the corresponding image frame have been marked. It may be a grayscale or binary image; in a binary image, for example, the marked lane-line region takes one pixel value and all other regions take another.
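A binary segmentation map as described can be illustrated with a small NumPy sketch; the concrete pixel values 255/0 are an assumed convention, not specified by the patent:

```python
import numpy as np

# A toy 6x8 binary segmentation map: 255 marks solid-white-lane-line pixels,
# 0 marks everything else (an illustrative convention).
seg_map = np.zeros((6, 8), dtype=np.uint8)
seg_map[1:5, 5] = 255                    # a short vertical lane-line segment

# Pixel coordinates (row, col) of the segmented lane-line region
lane_pixels = np.argwhere(seg_map == 255)
```

The extracted coordinate set corresponds to the "point map" representation discussed later, from which contours and areas can be computed.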
Specifically, the detection device inputs the acquired image frame into the trained lane line segmentation model, which performs lane line segmentation to obtain the solid white lane line segmentation map corresponding to the frame. The detection device then screens candidate emergency lane lines from the solid white lane lines marked in the map, and determines a target emergency lane line from the screened candidates.
In one embodiment, the detection device obtains the lane line contour of each solid white lane line in the segmentation map through a contour algorithm and screens candidate emergency lane lines based on these contours. From each contour, the detection device determines the lane line area and a lane line detection box of the corresponding solid white lane line, and from the detection box determines the lane line center point and the lane line length. Candidates are then screened according to the lane line area, center point, and length. The lane line contour may be an irregular shape, whereas the lane line detection box is a regular rectangle: the minimum circumscribed rectangle of the contour. The contour can be obtained by an existing contour algorithm from the marked lane-line pixels in the segmentation map, and the corresponding lane line area by an existing area-calculation algorithm, which is not repeated here.
Further, the detection device selects as candidate emergency lane lines those solid white lane lines whose lane line area is greater than or equal to a lane line area threshold, whose lane line center point lies on the right side of the image frame, and whose lane line length is greater than or equal to a lane line length threshold. The center point of the diagonal of the lane line detection box is taken as the lane line center point, and the length of the detection box as the lane line length. A center point "on the right side of the image frame" lies to the right of the frame's vertical central axis, that is, its abscissa exceeds half the image width. The thresholds, such as a lane line area of 400 and a lane line length of 25% of the frame's diagonal, are empirical values obtained through experiments; they are examples, not limitations. In this way, solid white lane lines that cannot be the target emergency lane line are removed from the detected lines, leaving candidates that may be the target; determining the target from these candidates improves the screening accuracy.
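The three screening rules above can be sketched as follows. The data layout, the use of the larger box side as the lane line length, and the example thresholds (400 and 25% of the diagonal) are illustrative assumptions, not the patent's code:

```python
import math

def screen_candidates(lane_lines, image_w, image_h):
    """lane_lines: list of dicts with 'area' (contour area) and 'box'
    (x, y, w, h), the minimum bounding rectangle of the lane-line contour.
    Returns the lines that pass the three screening rules in the text."""
    diag = math.hypot(image_w, image_h)
    min_area = 400                       # example lane-line area threshold
    min_length = 0.25 * diag             # example lane-line length threshold
    candidates = []
    for line in lane_lines:
        x, y, w, h = line["box"]
        center_x = x + w / 2.0           # center of the detection-box diagonal
        length = max(w, h)               # larger box side as lane-line length
        if (line["area"] >= min_area
                and center_x > image_w / 2.0   # right of vertical central axis
                and length >= min_length):
            candidates.append(line)
    return candidates
```

The surviving candidates would then go through the intercept-based selection described in the next embodiments.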
In one or more embodiments of the present application, the coordinate origin is the top left corner of the image frame, the horizontal direction is the horizontal axis, and the downward vertical direction is the vertical axis.
In one embodiment, the training step of the lane line segmentation model comprises: obtaining a plurality of sample images; marking the solid white lane lines in each sample image to obtain the corresponding sample segmentation map; forming the first training sample set from the sample images and their segmentation maps; and performing model training on this set with a preset machine learning algorithm to obtain the trained lane line segmentation model. Training proceeds iteratively, with the sample image as the input feature and the corresponding sample segmentation map as the expected output feature, until iteration stops. The preset machine learning algorithm includes, but is not limited to, a convolutional neural network such as CGNet.
In an embodiment, the solid white lane line segmentation map may be a point map, that is, a set of pixels. It may specifically contain the pixel coordinates and lane line category of each pixel in the corresponding image frame, or the pixel coordinates of each pixel in the region occupied by the solid white lane lines. The lane line categories may include "solid white lane line" and "other". The pixels corresponding to the solid white lane lines, and hence their regions in the image frame, can thus be determined from the segmentation map.
In the above embodiment, performing lane line segmentation with the trained lane line segmentation model allows the solid white lane lines in the image frame to be segmented quickly and accurately, which in turn improves the efficiency and accuracy of determining the target emergency lane line from the segmented lines.
In one embodiment, determining a target emergency lane line from the candidate emergency lane lines comprises: determining a lane line profile of each candidate emergency lane line; determining the intercept corresponding to the corresponding candidate emergency lane line based on the lane line profile; and determining a target emergency lane line from the candidate emergency lane lines according to the determined intercept.
Specifically, after screening out candidate emergency lane lines based on the segmentation map, the detection device determines the lane line contour of each candidate. For each candidate, it performs line fitting on the pixel points of the corresponding contour with a preset line-fitting algorithm and determines the slope and intercept of the candidate from the fitted line. Based on these intercepts, the detection device selects the candidate emergency lane line with the largest intercept as the target emergency lane line.
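The fitting-and-selection step can be sketched with a least-squares fit. Fitting x as a function of y is an assumed parameterization (under which a larger intercept corresponds to a line further to the right at the top of the frame), since the patent does not fix one:

```python
import numpy as np

def fit_line(contour_points):
    """Least-squares fit x = slope * y + intercept to contour pixel points,
    given as an (N, 2) array-like of (x, y) image coordinates."""
    pts = np.asarray(contour_points, dtype=float)
    slope, intercept = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return slope, intercept

def pick_target_lane_line(candidate_contours):
    """Return (index, slope, intercept) of the candidate with the largest
    intercept, i.e. the assumed rightmost candidate emergency lane line."""
    fits = [fit_line(c) for c in candidate_contours]
    best = max(range(len(fits)), key=lambda i: fits[i][1])
    return best, fits[best][0], fits[best][1]
```

The returned slope and intercept also serve as the position of the target emergency lane line in the later occupancy comparison.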
In an embodiment, the detection device may obtain, by a contour algorithm, a lane line contour corresponding to each candidate emergency lane line based on the selected pixel point set corresponding to each candidate emergency lane line. The lane line profile of the candidate emergency lane line may also be dynamically determined by the detection device during the screening process of the candidate emergency lane line.
In the above embodiment, the target emergency lane line is screened from the candidate emergency lane lines based on the intercept determined by the lane line profile corresponding to the candidate emergency lane line, so that the position of the emergency lane can be determined based on the screened target emergency lane line. It is to be understood that the candidate emergency lane line having the largest intercept in the image frame is determined as the target emergency lane line, that is, the candidate emergency lane line at the rightmost end of the image frame is determined as the target emergency lane line, and thus, the region at the right side of the target emergency lane line in the image frame may be determined as the region including the emergency lane.
In one embodiment, performing target vehicle detection on the image frame includes: performing vehicle detection on the image frame through a trained vehicle detection model to obtain a vehicle detection frame corresponding to each candidate vehicle in the image frame; and screening the target vehicles from the candidate vehicles according to the vehicle detection frames.
The vehicle detection frame corresponds to a candidate vehicle in the image frame and identifies the position of that candidate vehicle in the image frame. Each vehicle detection frame corresponds to vehicle detection frame data that uniquely determines the frame within the image frame. The vehicle detection frame data may specifically include the center-point coordinates, height, and width of the vehicle detection frame, or the upper-left and lower-right corner coordinates of the vehicle detection frame, which are not enumerated here.
Specifically, the detection device inputs an image frame into the trained vehicle detection model and performs vehicle detection on the image frame through the vehicle detection model to obtain the vehicle detection frame corresponding to each candidate vehicle in the image frame. For each candidate vehicle in the image frame, the detection device determines the corresponding vehicle area, detection-frame center point, aspect ratio, and height from the corresponding vehicle detection frame, and screens the target vehicles from the detected candidate vehicles according to the vehicle area, detection-frame center point, aspect ratio, and height corresponding to each vehicle detection frame.
In one embodiment, the detection device determines the area of a vehicle detection frame as the corresponding vehicle area. According to the vehicle area, detection-frame center point, aspect ratio, and height corresponding to each vehicle detection frame, the detection device selects, as target vehicles, those candidate vehicles whose vehicle area is greater than or equal to a vehicle area threshold, whose detection-frame center point is located to the right of the vertical center axis of the image frame, whose aspect ratio is less than or equal to an aspect ratio threshold, and whose height is less than or equal to a height threshold. The vehicle area threshold, the aspect ratio threshold, and the height threshold are empirical values obtained through a number of experiments; for example, the vehicle area threshold may be 400, the aspect ratio threshold 2, and the height threshold two-thirds of the height of the image frame. In this way, candidate vehicles that cannot be in the emergency lane, as well as detections that are not actually vehicles but were falsely detected as vehicles, are removed based on the vehicle detection frames, so that unnecessary further emergency lane occupancy detection is not performed on them, which improves detection accuracy and efficiency.
For example, if the vehicle detection frame data corresponding to the vehicle detection frame includes the upper-left corner coordinates (X1, Y1) and the lower-right corner coordinates (X2, Y2), then the vehicle area corresponding to the vehicle detection frame is (X2-X1)×(Y2-Y1), the coordinates of the detection frame center point are ((X1+X2)/2, (Y1+Y2)/2), the width is (X2-X1), the height is (Y2-Y1), and the aspect ratio is (X2-X1)/(Y2-Y1).
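The detection-frame geometry and threshold screening above can be sketched as follows. The function name and the `(x1, y1, x2, y2)` box representation are illustrative assumptions; the threshold values follow the example figures given in the text (area 400, aspect ratio 2, height two-thirds of the frame height).

```python
def filter_target_vehicles(boxes, frame_width, frame_height,
                           area_thresh=400, aspect_thresh=2.0):
    """Screen target vehicles from candidate detection boxes.

    boxes: list of (x1, y1, x2, y2) corner coordinates, with (x1, y1) the
    upper-left corner and (x2, y2) the lower-right corner in pixel units.
    """
    height_thresh = frame_height * 2.0 / 3.0
    targets = []
    for (x1, y1, x2, y2) in boxes:
        w, h = x2 - x1, y2 - y1
        area = w * h                       # (X2-X1) x (Y2-Y1)
        cx = (x1 + x2) / 2.0               # abscissa of the center point
        aspect = w / h                     # width / height
        if (area >= area_thresh
                and cx > frame_width / 2.0  # right of the vertical center axis
                and aspect <= aspect_thresh
                and h <= height_thresh):
            targets.append((x1, y1, x2, y2))
    return targets
```

A box on the left half of the frame, or an implausibly wide or tall detection, is dropped before any further emergency lane occupancy check.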
In one embodiment, the training step of the vehicle detection model includes: obtaining a second training sample set, where the second training sample set includes sample image frames and sample vehicle detection frames obtained by labeling each candidate vehicle in the sample image frames; and performing model training with the sample image frames as input features and all sample vehicle detection frames corresponding to the sample image frames as expected output features, to obtain the trained vehicle detection model. Machine learning algorithms involved in training the vehicle detection model include, but are not limited to, convolutional neural networks such as YOLOv3.
In the above embodiment, the trained vehicle detection model is used to detect vehicles in the image frame, so that the vehicle detection frame corresponding to each candidate vehicle can be obtained quickly and accurately, and the target vehicles are then screened from the detected candidate vehicles based on the vehicle detection frames. Further emergency lane occupancy detection is performed only on the screened target vehicles rather than directly on all candidate vehicles in the image frame, which improves the efficiency and accuracy of emergency lane occupancy detection.
In one embodiment, the emergency lane occupancy detection method further includes: determining the position of the target emergency lane line according to the intercept and the slope corresponding to the target emergency lane line; and determining the position of the target vehicle according to the vehicle detection frame corresponding to the target vehicle.
Specifically, the detection device obtains a fitted straight line corresponding to the target emergency lane line through fitting based on the lane line fitting manner provided in one or more of the above embodiments, determines the slope of the fitted line as the slope of the corresponding target emergency lane line, and determines the intercept of the fitted line as the intercept of the corresponding target emergency lane line. The detection device then obtains the position of the target emergency lane line in the image frame from the slope and intercept of the target emergency lane line. It is understood that, since the fitted straight line is obtained by fitting the lane line contour of the target emergency lane line, the position of the fitted line in the image frame may be used as the position of the corresponding target emergency lane line in the image frame. After the vehicle detection frame corresponding to each target vehicle in the image frame is obtained, the position of a specified point of the vehicle detection frame in the image frame is taken as the position of the corresponding target vehicle in the image frame. The specified point may be, for example, the center point of the bottom edge (lower edge) of the vehicle detection frame.
For example, if the vehicle detection frame data corresponding to the vehicle detection frame includes the upper-left corner coordinates (X1, Y1) and the lower-right corner coordinates (X2, Y2), the coordinates of the center point of the bottom edge of the vehicle detection frame are ((X1+X2)/2, Y2); if the position of this point in the image frame is taken as the position of the corresponding target vehicle, the position of the target vehicle is ((X1+X2)/2, Y2).
In the above embodiment, the position of the target emergency lane line is determined based on the slope and intercept corresponding to the target emergency lane line, and the position of the corresponding target vehicle is determined based on the vehicle detection frame, so that the emergency lane occupancy detection is performed based on the position of the target emergency lane line and the position of the target vehicle, and the detection accuracy can be improved.
In one embodiment, the emergency lane occupancy detection of the target vehicle according to the position of the target emergency lane line and the position of the target vehicle includes: judging whether the target vehicle is positioned on the right side of the target emergency lane line or not according to the position of the target emergency lane line and the position of the target vehicle; and when the target vehicle is positioned at the right side of the target emergency lane line, judging that the target vehicle has an emergency lane occupation event.
Specifically, the detection device compares the position of the target emergency lane line in the image frame with the position of the target vehicle in the image frame to determine whether the target vehicle is on the right side of the target emergency lane line according to the comparison result. When the target vehicle is judged to be positioned on the right side of the target emergency lane line, the target vehicle is shown to be positioned in an emergency lane, and the detection equipment judges that the target vehicle has an emergency lane occupation event.
In one embodiment, the position of the target emergency lane line is characterized by the position of the corresponding fitted straight line, that is, by the straight-line equation of the fitted line. The position of the target vehicle is characterized by the position of the specified point in the corresponding vehicle detection frame, that is, by the coordinates of the specified point in the image frame. According to the ordinate of the target vehicle's position and the straight-line equation of the target emergency lane line, the detection device determines the pixel point on the target emergency lane line whose ordinate coincides with that of the target vehicle's position, together with the abscissa of that pixel point. The detection device then compares this abscissa with the abscissa of the target vehicle's position; when the abscissa of the target vehicle's position is greater than the abscissa determined from the target emergency lane line, the target vehicle is determined to be on the right side of the target emergency lane line, and an emergency lane occupancy event of the target vehicle is determined to exist.
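A minimal sketch of this right-of-line test, assuming the fitted line is expressed as `y = slope * x + intercept` in image coordinates (the text leaves the axis convention open) and that the bottom-edge center of the detection frame serves as the vehicle position:

```python
def occupies_emergency_lane(box, slope, intercept):
    """Check whether a target vehicle lies to the right of the target
    emergency lane line.

    box: (x1, y1, x2, y2) vehicle detection frame. The lane line is the
    fitted straight line y = slope * x + intercept; slope is assumed
    nonzero, since a lane line is never horizontal in the image.
    """
    x1, y1, x2, y2 = box
    vx, vy = (x1 + x2) / 2.0, y2          # bottom-edge center of the box
    # Abscissa of the lane-line pixel whose ordinate matches the vehicle's
    line_x = (vy - intercept) / slope
    return vx > line_x                    # right of the line -> occupancy
```

This mirrors the geometric comparison above: solve the line equation at the vehicle's ordinate, then compare abscissas.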
In one embodiment, the detection device determines a horizontal straight line having a vertical coordinate coincident with a vertical coordinate of the position of the target vehicle in the image frame, determines an intersection point between a fitted straight line corresponding to the target emergency lane line and the horizontal straight line, and determines a relative positional relationship between the intersection point and the position of the target vehicle in the image frame. And when the position of the target vehicle is positioned at the right side of the intersection point, judging that the target vehicle has an emergency lane occupation event.
In the embodiment, whether the target vehicle occupies the emergency lane is detected according to the respective positions of the target emergency lane line and the target vehicle in the image frame, and the detection efficiency and accuracy can be improved under the condition of reducing the operation complexity of emergency lane occupation detection.
In one embodiment, step 202 comprises: dynamically acquiring positioning information of a target detection vehicle; and when the target detection vehicle is judged to be in the preset detection road section according to the positioning information, acquiring an image frame corresponding to the target detection vehicle.
When the detection device is a terminal, the target detection vehicle refers to a vehicle in which the terminal is located when the terminal executes emergency lane occupation detection operation. When the detection device is a server, the target detection vehicle is a vehicle loaded with an image acquisition device for acquiring a video or image set and sending the acquired video or image set to the server. The Positioning information is information for representing the current position of the target detection vehicle, and may specifically be GPS (Global Positioning System) information. The preset detection road section refers to a preset road section which needs emergency lane occupation detection operation, and specifically may refer to a road section provided with an emergency lane.
Specifically, the detection device dynamically acquires the current positioning information of the target detection vehicle and compares the acquired positioning information with the preset detection road sections. When the positioning information matches a preset detection road section, indicating that the target detection vehicle is currently within that road section, the detection device acquires an image frame for emergency lane occupancy detection from the video or image set corresponding to the target detection vehicle, so as to perform the corresponding emergency lane occupancy detection based on the image frame.
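A simplified sketch of the road-section match; representing each preset detection road section as a latitude/longitude bounding box is an assumption for illustration only, as the text does not specify how the preset road sections are stored or matched against GPS readings.

```python
def in_detection_section(position, sections):
    """Decide whether the target detection vehicle is inside any preset
    detection road section.

    position: (lat, lon) from the vehicle's GPS. sections: list of
    ((lat_min, lon_min), (lat_max, lon_max)) bounding boxes standing in
    for the preset road sections that include an emergency lane.
    """
    lat, lon = position
    return any(lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
               for (lat_min, lon_min), (lat_max, lon_max) in sections)
```

Only when this check passes would the detection device pull an image frame for emergency lane occupancy detection, avoiding wasted work on road segments without an emergency lane.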
In the above embodiment, when the target detection vehicle is determined to be in the preset detection road section according to the current positioning information and the preset detection road section of the target detection vehicle, the image frame corresponding to the target detection vehicle and used for emergency lane occupation detection is acquired, so as to avoid performing unnecessary emergency lane occupation detection operation on the road section not including the emergency lane.
As shown in fig. 3, in an embodiment, an emergency lane occupancy detection method is provided, which specifically includes the following steps:
step 302, dynamically obtaining the positioning information of the target detection vehicle.
And 304, acquiring an image frame corresponding to the target detection vehicle when the target detection vehicle is judged to be in the preset detection road section according to the positioning information.
And step 306, performing lane line segmentation on the image frame through the trained lane line segmentation model to obtain a lane white-solid line segmentation map.
And 308, screening candidate emergency lane lines based on the lane white-solid line segmentation map.
At step 310, a lane line profile for each candidate emergency lane line is determined.
Step 312, determining the intercept corresponding to the corresponding candidate emergency lane line based on the lane line profile.
And step 314, determining a target emergency lane line from the candidate emergency lane lines according to the determined intercept.
And step 316, determining the position of the target emergency lane line according to the intercept and the slope corresponding to the target emergency lane line.
And step 318, when the target emergency lane line is segmented from the image frame, performing vehicle detection on the image frame through the trained vehicle detection model to obtain a vehicle detection frame corresponding to each candidate vehicle in the image frame.
And step 320, screening the target vehicles from the candidate vehicles according to the vehicle detection frame.
And step 322, when the target vehicle is detected from the image frames, determining the position of the target vehicle according to the vehicle detection frame corresponding to the target vehicle.
Step 324, determining whether the target vehicle is located on the right side of the target emergency lane line according to the position of the target emergency lane line and the position of the target vehicle.
And step 326, when the target vehicle is positioned at the right side of the target emergency lane line, judging that the target vehicle has an emergency lane occupation event.
And 328, when the target vehicle is judged to have the emergency lane occupation event, sending the emergency lane occupation information corresponding to the target vehicle to the central management platform.
In the above embodiment, when it is dynamically determined that the target detection vehicle is located in the preset detection road segment, the image frame corresponding to the target detection vehicle is acquired, so as to trigger dynamic emergency lane occupation detection based on the image frame. The lane white solid lines are segmented from the image frames through the lane line segmentation model, the segmentation efficiency and accuracy of the lane white solid lines can be improved, and the target emergency lane lines are screened from the segmented lane white solid lines on the basis of lane line outlines, intercept and the like, so that the screening accuracy of the target emergency lane lines can be improved. Furthermore, after the target emergency lane line is segmented from the image frame, the candidate vehicles are detected from the image frame through the vehicle detection model, and the target vehicles are screened based on the vehicle detection frames of the candidate vehicles, so that the target vehicles can be further subjected to emergency lane occupation detection based on the positions of the target vehicles and the positions of the target emergency lane line, the emergency lane occupation detection does not need to be performed on each candidate vehicle in the image frame, and the detection efficiency and accuracy can be improved.
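The overall flow of steps 302-326 can be sketched as a single-frame pipeline. The four callables are hypothetical stand-ins for the trained lane line segmentation model, the intercept-based lane-line selection, the vehicle detection model with its screening post-processing, and the right-of-line test described above; they are parameters here precisely because the text leaves their implementations to the respective embodiments.

```python
def detect_emergency_lane_occupancy(frame, segment_lanes, detect_vehicles,
                                    select_lane_line, is_right_of_line):
    """One pass of the steps 302-326 pipeline over a single image frame.

    Returns the list of vehicle detection frames judged to occupy the
    emergency lane (empty when no target emergency lane line or no target
    vehicle is found).
    """
    candidates = segment_lanes(frame)          # candidate emergency lane lines
    lane_line = select_lane_line(candidates)   # target emergency lane line
    if lane_line is None:
        return []                              # no emergency lane visible
    violators = []
    for box in detect_vehicles(frame):         # screened target vehicles
        if is_right_of_line(box, lane_line):   # right of line -> occupancy
            violators.append(box)
    return violators
```

In the reporting step (328), each returned box would be packaged as emergency lane occupancy information and sent to the central management platform.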
It can be understood that the method for detecting emergency lane occupancy of the expressway based on machine vision is realized by the method for detecting emergency lane occupancy provided in one or more of the above embodiments. When the detection equipment is a server, the image acquisition equipment is deployed in the target detection vehicle, the image acquisition equipment dynamically acquires videos or image sets, and sends the acquired videos or image sets to the server serving as the detection equipment, so that the server can realize the mobile detection of the occupation of the emergency lane based on the received videos or image sets. When the detection device is a terminal, the terminal serving as the detection device is deployed on a target detection vehicle, the terminal locally realizes emergency lane occupation detection based on the acquired image frames, and the image acquisition device does not need to send acquired videos or image frames to a server serving as the detection device before emergency lane occupation detection operation, so that the detection efficiency can be further improved and the detection cost can be reduced under the condition of ensuring the detection accuracy.
In one embodiment, the target detection vehicle can run on any lane except the emergency lane, and if the target detection vehicle runs on the rightmost lane except the emergency lane, the accuracy of emergency lane occupation detection can be further improved.
Fig. 4 is a schematic diagram illustrating an emergency lane occupancy detection method according to an embodiment. As shown in fig. 4, when the emergency lane occupancy detection process starts, the detection device acquires an image frame and performs lane line segmentation on it to obtain a lane white-solid line segmentation map. The detection device performs post-processing based on the segmentation map and determines whether a target emergency lane line exists in the image frame. When a target emergency lane line is determined to exist, the detection device performs vehicle detection on the image frame to obtain the vehicle detection frames of the candidate vehicles, performs post-processing based on the vehicle detection frames, and determines whether a target vehicle exists in the image frame. When a target vehicle is determined to exist, the detection device performs emergency lane occupancy detection on the target vehicle according to the position of the target emergency lane line and the position of the target vehicle, so as to determine whether the target vehicle is located in the emergency lane. When the target vehicle is determined to be located in the emergency lane, the detection device triggers an alarm, outputs the emergency lane occupancy information of the illegal vehicle, reports that information to the central management platform, and ends the current emergency lane occupancy detection process. Correspondingly, when no target emergency lane line exists in the image frame, or no target vehicle in the emergency lane exists in the image frame, no alarm is triggered; the image frame and the corresponding emergency lane occupancy detection result are reported to the central management platform, and the current emergency lane occupancy detection process ends.
It can be understood that when it is determined that there is no target vehicle in the image frame, or when it is determined that there is no target vehicle in the emergency lane in the image frame, the subsequent operation may not be performed on the currently acquired image frame any more, for example, the image frame may also be directly discarded, and it is not necessary to report the image frame and the corresponding emergency lane occupancy detection result to the central management platform.
It should be understood that although the various steps in the flow charts of fig. 2-3 are shown in the order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-3 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential but may alternate or interleave with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided an emergency lane occupancy detection apparatus 500 including: an obtaining module 501, a dividing module 502, a first detecting module 503, a second detecting module 504, and a sending module 505, wherein:
an obtaining module 501, configured to obtain an image frame;
a segmentation module 502, configured to perform emergency lane line segmentation on the image frames;
the first detection module 503 is configured to perform target vehicle detection on the image frames when a target emergency lane line is segmented from the image frames;
the second detection module 504 is configured to, when a target vehicle is detected from the image frames, perform emergency lane occupancy detection on the target vehicle according to a position of a target emergency lane line and a position of the target vehicle;
and a sending module 505, configured to send emergency lane occupation information corresponding to the target vehicle to the central management platform when it is determined that the target vehicle has an emergency lane occupation event.
In one embodiment, the segmentation module 502 is further configured to perform lane line segmentation on the image frame through a trained lane line segmentation model to obtain a lane white-solid line segmentation map; screen candidate emergency lane lines based on the lane white-solid line segmentation map; and determine a target emergency lane line from the candidate emergency lane lines.
In one embodiment, the segmentation module 502 is further configured to determine a lane line profile for each candidate emergency lane line; determining the intercept corresponding to the corresponding candidate emergency lane line based on the lane line profile; and determining a target emergency lane line from the candidate emergency lane lines according to the determined intercept.
In one embodiment, the first detecting module 503 is further configured to perform vehicle detection on the image frames through a trained vehicle detection model, so as to obtain a vehicle detection frame corresponding to each candidate vehicle in the image frames; and screening the target vehicles from the candidate vehicles according to the vehicle detection frame.
In one embodiment, the segmentation module 502 is further configured to determine the position of the target emergency lane line according to the intercept and the slope corresponding to the target emergency lane line; the first detecting module 503 is further configured to determine a position of the target vehicle according to the vehicle detecting frame corresponding to the target vehicle.
In one embodiment, the second detecting module 504 is further configured to determine whether the target vehicle is located on the right side of the target emergency lane line according to the position of the target emergency lane line and the position of the target vehicle; and when the target vehicle is positioned at the right side of the target emergency lane line, judging that the target vehicle has an emergency lane occupation event.
In one embodiment, the obtaining module 501 is further configured to dynamically obtain positioning information of the target detection vehicle; and when the target detection vehicle is judged to be in the preset detection road section according to the positioning information, acquiring an image frame corresponding to the target detection vehicle.
For specific definition of the emergency lane occupancy detection device, reference may be made to the above definition of the emergency lane occupancy detection method, which is not described herein again. All modules in the emergency lane occupation detection device can be completely or partially realized through software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server as the detection device, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing image frames. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an emergency lane occupancy detection method.
In one embodiment, a computer device is provided, which may be a terminal as the detection device, and its internal structure diagram may be as shown in fig. 7. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an emergency lane occupancy detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the configurations shown in fig. 6 and 7 are merely block diagrams of some configurations relevant to the present disclosure, and do not constitute a limitation on the computing devices to which the present disclosure may be applied, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program: acquiring an image frame; carrying out emergency lane line segmentation on the image frame; when a target emergency lane line is segmented from the image frame, carrying out target vehicle detection on the image frame; when a target vehicle is detected from the image frame, performing emergency lane occupation detection on the target vehicle according to the position of a target emergency lane line and the position of the target vehicle; and when the target vehicle is judged to have an emergency lane occupation event, sending emergency lane occupation information corresponding to the target vehicle to the central management platform.
In one embodiment, the processor, when executing the computer program, further performs the steps of: performing lane line segmentation on the image frame through a trained lane line segmentation model to obtain a lane white-solid line segmentation map; screening candidate emergency lane lines based on the lane white-solid line segmentation map; and determining a target emergency lane line from the candidate emergency lane lines.
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining a lane line profile of each candidate emergency lane line; determining the intercept corresponding to the corresponding candidate emergency lane line based on the lane line profile; and determining a target emergency lane line from the candidate emergency lane lines according to the determined intercept.
In one embodiment, the processor, when executing the computer program, further performs the steps of: performing vehicle detection on the image frame through a trained vehicle detection model to obtain a vehicle detection box corresponding to each candidate vehicle in the image frame; and screening target vehicles from the candidate vehicles according to the vehicle detection boxes.
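A hedged sketch of the screening step follows; the box format, class names, and thresholds are illustrative assumptions, since the patent does not specify the screening criteria.

```python
def screen_target_vehicles(detections, min_score=0.5, min_area=900):
    """Keep detections that are confident enough and large enough to be a
    real vehicle near the camera; thresholds are illustrative only."""
    targets = []
    for box in detections:  # box: dict with x1, y1, x2, y2, score, label
        width = box["x2"] - box["x1"]
        height = box["y2"] - box["y1"]
        if (box["label"] in ("car", "truck", "bus")
                and box["score"] >= min_score
                and width * height >= min_area):
            targets.append(box)
    return targets
```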
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining the position of the target emergency lane line according to the intercept and slope corresponding to the target emergency lane line; and determining the position of the target vehicle according to the vehicle detection box corresponding to the target vehicle.
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining whether the target vehicle is located on the right side of the target emergency lane line according to the position of the target emergency lane line and the position of the target vehicle; and when the target vehicle is located on the right side of the target emergency lane line, determining that an emergency lane occupation event has occurred for the target vehicle.
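Given a lane line modelled as x = slope·y + intercept and a vehicle represented by its detection box, the right-side judgment reduces to comparing the vehicle's bottom-centre x (roughly where the tyres meet the road) with the lane line's x at the same image row. A sketch with illustrative box fields:

```python
def is_on_emergency_lane(box, slope, intercept):
    """Flag a vehicle whose bottom-centre lies to the right of the target
    emergency lane line at the same image row. The lane line is modelled
    as x = slope*y + intercept in image coordinates."""
    cx = (box["x1"] + box["x2"]) / 2.0  # bottom-centre x of the detection box
    y = box["y2"]                       # row where the vehicle meets the road
    line_x = slope * y + intercept      # lane-line x at that row
    return cx > line_x
```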
In one embodiment, the processor, when executing the computer program, further performs the steps of: dynamically acquiring positioning information of a target detection vehicle; and when it is determined from the positioning information that the target detection vehicle is within a preset detection road section, acquiring an image frame corresponding to the target detection vehicle.
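A minimal sketch of this gating step, assuming the preset detection road section can be approximated by a latitude/longitude bounding box — an assumption, since the patent leaves the section geometry unspecified:

```python
def in_detection_section(lat, lon, section):
    """Check whether a GPS fix falls inside a preset detection road section,
    modelled here as a (lat_min, lat_max, lon_min, lon_max) bounding box."""
    lat_min, lat_max, lon_min, lon_max = section
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def maybe_grab_frame(position, section, camera):
    """Only request an image frame from the target detection vehicle's camera
    while its reported position is inside the monitored road section."""
    if in_detection_section(position[0], position[1], section):
        return camera()
    return None
```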
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the following steps: acquiring an image frame; performing emergency lane line segmentation on the image frame; when a target emergency lane line is segmented from the image frame, performing target vehicle detection on the image frame; when a target vehicle is detected from the image frame, performing emergency lane occupation detection on the target vehicle according to the position of the target emergency lane line and the position of the target vehicle; and when it is determined that an emergency lane occupation event has occurred for the target vehicle, sending emergency lane occupation information corresponding to the target vehicle to a central management platform.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: performing lane line segmentation on the image frame through a trained lane line segmentation model to obtain a solid white lane line segmentation map; screening candidate emergency lane lines based on the solid white lane line segmentation map; and determining a target emergency lane line from the candidate emergency lane lines.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: determining the lane line contour of each candidate emergency lane line; determining the intercept of each candidate emergency lane line based on its lane line contour; and determining a target emergency lane line from the candidate emergency lane lines according to the determined intercepts.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: performing vehicle detection on the image frame through a trained vehicle detection model to obtain a vehicle detection box corresponding to each candidate vehicle in the image frame; and screening target vehicles from the candidate vehicles according to the vehicle detection boxes.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: determining the position of the target emergency lane line according to the intercept and slope corresponding to the target emergency lane line; and determining the position of the target vehicle according to the vehicle detection box corresponding to the target vehicle.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: determining whether the target vehicle is located on the right side of the target emergency lane line according to the position of the target emergency lane line and the position of the target vehicle; and when the target vehicle is located on the right side of the target emergency lane line, determining that an emergency lane occupation event has occurred for the target vehicle.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: dynamically acquiring positioning information of a target detection vehicle; and when it is determined from the positioning information that the target detection vehicle is within a preset detection road section, acquiring an image frame corresponding to the target detection vehicle.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage. Volatile memory can include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above examples express only several embodiments of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An emergency lane occupation detection method, characterized in that the method comprises:
acquiring an image frame;
performing emergency lane line segmentation on the image frame;
when a target emergency lane line is segmented from the image frame, performing target vehicle detection on the image frame;
when a target vehicle is detected from the image frame, performing emergency lane occupation detection on the target vehicle according to the position of the target emergency lane line and the position of the target vehicle;
and when it is determined that an emergency lane occupation event has occurred for the target vehicle, sending emergency lane occupation information corresponding to the target vehicle to a central management platform.
2. The method of claim 1, wherein the performing emergency lane line segmentation on the image frame comprises:
performing lane line segmentation on the image frame through a trained lane line segmentation model to obtain a solid white lane line segmentation map;
screening candidate emergency lane lines based on the solid white lane line segmentation map;
and determining a target emergency lane line from the candidate emergency lane lines.
3. The method of claim 2, wherein the determining a target emergency lane line from the candidate emergency lane lines comprises:
determining the lane line contour of each candidate emergency lane line;
determining the intercept of each candidate emergency lane line based on its lane line contour;
and determining a target emergency lane line from the candidate emergency lane lines according to the determined intercepts.
4. The method of claim 1, wherein the performing target vehicle detection on the image frame comprises:
performing vehicle detection on the image frame through a trained vehicle detection model to obtain a vehicle detection box corresponding to each candidate vehicle in the image frame;
and screening target vehicles from the candidate vehicles according to the vehicle detection boxes.
5. The method of claim 1, further comprising:
determining the position of the target emergency lane line according to the intercept and the slope corresponding to the target emergency lane line;
and determining the position of the target vehicle according to the vehicle detection box corresponding to the target vehicle.
6. The method of claim 1, wherein the performing emergency lane occupation detection on the target vehicle according to the position of the target emergency lane line and the position of the target vehicle comprises:
determining whether the target vehicle is located on the right side of the target emergency lane line according to the position of the target emergency lane line and the position of the target vehicle;
and when the target vehicle is located on the right side of the target emergency lane line, determining that an emergency lane occupation event has occurred for the target vehicle.
7. The method of any of claims 1 to 6, wherein said acquiring an image frame comprises:
dynamically acquiring positioning information of a target detection vehicle;
and when it is determined from the positioning information that the target detection vehicle is within a preset detection road section, acquiring an image frame corresponding to the target detection vehicle.
8. An emergency lane occupation detection device, characterized in that the device comprises:
the acquisition module is used for acquiring image frames;
the segmentation module is used for performing emergency lane line segmentation on the image frames;
the first detection module is used for detecting a target vehicle in the image frame when a target emergency lane line is segmented from the image frame;
the second detection module is used for detecting the emergency lane occupation of the target vehicle according to the position of the target emergency lane line and the position of the target vehicle when the target vehicle is detected from the image frame;
and the sending module is used for sending the emergency lane occupation information corresponding to the target vehicle to a central management platform when it is determined that an emergency lane occupation event has occurred for the target vehicle.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010533926.2A 2020-06-12 2020-06-12 Emergency lane occupation detection method and device, computer equipment and storage medium Pending CN113807125A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010533926.2A CN113807125A (en) 2020-06-12 2020-06-12 Emergency lane occupation detection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113807125A true CN113807125A (en) 2021-12-17

Family

ID=78892159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010533926.2A Pending CN113807125A (en) 2020-06-12 2020-06-12 Emergency lane occupation detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113807125A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855759A (en) * 2012-07-05 2013-01-02 中国科学院遥感应用研究所 Automatic collecting method of high-resolution satellite remote sensing traffic flow information
CN105740837A (en) * 2016-02-03 2016-07-06 安徽清新互联信息科技有限公司 Unmanned aerial vehicle-based illegal emergency lane occupancy detection method
CN105741559A (en) * 2016-02-03 2016-07-06 安徽清新互联信息科技有限公司 Emergency vehicle lane illegal occupation detection method based on lane line model
CN105740836A (en) * 2016-02-03 2016-07-06 安徽清新互联信息科技有限公司 Illegal emergency lane occupancy detection method
CN107705552A (en) * 2016-08-08 2018-02-16 杭州海康威视数字技术股份有限公司 A kind of Emergency Vehicle Lane takes behavioral value method, apparatus and system
CN110533925A (en) * 2019-09-04 2019-12-03 上海眼控科技股份有限公司 Processing method, device, computer equipment and the storage medium of vehicle illegal video


Similar Documents

Publication Publication Date Title
CN110414313B (en) Abnormal behavior alarming method, device, server and storage medium
CN108985162A (en) Object real-time tracking method, apparatus, computer equipment and storage medium
CN113470374B (en) Vehicle overspeed monitoring method and device, computer equipment and storage medium
CN105493502A (en) Video monitoring method, video monitoring system, and computer program product
US9460367B2 (en) Method and system for automating an image rejection process
WO2013186662A1 (en) Multi-cue object detection and analysis
CN112163543A (en) Method and system for detecting illegal lane occupation of vehicle
CN110929589B (en) Method, apparatus, computer apparatus and storage medium for identifying vehicle characteristics
CN110838230B (en) Mobile video monitoring method, monitoring center and system
CN113139403A (en) Violation behavior identification method and device, computer equipment and storage medium
CN110225236B (en) Method and device for configuring parameters for video monitoring system and video monitoring system
CN112733598A (en) Vehicle law violation determination method and device, computer equipment and storage medium
CN112101202A (en) Construction scene recognition method and device and electronic equipment
CN114283383A (en) Smart city highway maintenance method, computer equipment and medium
CN112084892B (en) Road abnormal event detection management device and method thereof
CN110516559B (en) Target tracking method and device suitable for accurate monitoring and computer equipment
CN113807125A (en) Emergency lane occupation detection method and device, computer equipment and storage medium
CN116311015A (en) Road scene recognition method, device, server, storage medium and program product
CN114973326A (en) Fall early warning method, device, equipment and readable storage medium
CN114882709A (en) Vehicle congestion detection method and device and computer storage medium
CN114170498A (en) Detection method and device for spilled objects, computer equipment and storage medium
CN114219073A (en) Method and device for determining attribute information, storage medium and electronic device
CN113673362A (en) Method and device for determining motion state of object, computer equipment and storage medium
CN113221800A (en) Monitoring and judging method and system for target to be detected
CN114093155A (en) Traffic accident responsibility tracing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination