CN115762153A - Method and device for detecting backing up - Google Patents

Method and device for detecting backing up


Publication number
CN115762153A
Authority
CN
China
Prior art keywords
vehicle
detection frame
lane
driving direction
track
Prior art date
Legal status
Pending
Application number
CN202211415757.8A
Other languages
Chinese (zh)
Inventor
陈明轩 (Chen Mingxuan)
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202211415757.8A priority Critical patent/CN115762153A/en
Publication of CN115762153A publication Critical patent/CN115762153A/en
Priority to PCT/CN2023/121892 priority patent/WO2024098992A1/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a reversing detection method and device, belonging to the technical field of image processing. The reversing detection method comprises the following steps: acquiring a road monitoring image and identifying vehicle detection frames in the road monitoring image; for each vehicle detection frame, determining the orientation of the vehicle according to the vehicle detection frame and judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located; when the orientation of the vehicle is consistent with the driving direction of the lane, judging whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane, determining that the vehicle is driving in the wrong direction. The method and the device can improve the recognition accuracy of reversing behavior.

Description

Method and device for detecting backing up
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a method and an apparatus for detecting vehicle reversing.
Background
With the rapid growth in motor vehicle ownership and the weak awareness of driving regulations among some drivers, illegal reversing maneuvers occur on expressways. Such behavior greatly reduces the traffic capacity of roads, easily causes traffic accidents, and leads to casualties and property losses.
At present, on-site supervision can be carried out by police officers, traffic assistants, and the like, and reversing behavior can also be identified from road surveillance video, but this approach consumes a large amount of human resources. With the development of computer vision, computer vision techniques have advanced rapidly in the field of traffic monitoring, and identifying and detecting reversing behavior with computer vision can save police resources; however, the accuracy of existing computer vision techniques in recognizing reversing behavior is low.
Disclosure of Invention
The embodiment of the disclosure provides a reversing detection method and a reversing detection device, which can improve the accuracy of reversing behavior identification.
The embodiment of the disclosure provides the following technical scheme:
in one aspect, a reverse detection method is provided, including:
acquiring a road monitoring image, and identifying a vehicle detection frame in the road monitoring image;
for each vehicle detection frame, determining the direction of a vehicle according to the vehicle detection frame, and judging whether the direction of the vehicle is consistent with the driving direction of the lane where the vehicle is located;
when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, judging whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane in which the vehicle is located, determining that the vehicle is driving in the wrong direction.
In some embodiments, the determining whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located includes:
determining a lane where a center point of the vehicle detection frame is located;
and acquiring the driving direction of the lane, and comparing the driving direction of the lane with the orientation of the vehicle.
In some embodiments, said determining an orientation of a vehicle from said vehicle detection frame comprises:
and inputting the vehicle detection frame into a pre-trained vehicle attribute model, wherein the vehicle attribute model outputs which of the vehicle head and the vehicle tail is closer to the camera position.
In some embodiments, the method further comprises the step of training the vehicle property model, the step of training the vehicle property model comprising:
establishing an initial vehicle attribute model;
and inputting a plurality of groups of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, wherein each group of training data comprises a vehicle image and a label indicating which of the head and the tail is closer to the camera position.
In some embodiments, the determining whether the vehicle is backing up according to the driving direction of the vehicle includes:
allocating a corresponding tracking identifier for each vehicle;
for the tracking identifier corresponding to each vehicle, acquiring the vehicle detection frames corresponding to the tracking identifier in the latest N frames of road monitoring images, and taking a preset position point of each vehicle detection frame as a track point to obtain a plurality of track points;
connecting the plurality of track points to obtain the track of the vehicle;
determining that the vehicle is in a to-be-determined reversing state when an included angle between the track of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold value;
when the vehicle is determined to be in the to-be-determined reversing state at least M times in succession, and the length of the track of the vehicle is greater than a preset second threshold value, determining that the vehicle is reversing;
wherein M and N are integers greater than 1.
In some embodiments, the method further comprises:
when the length of the track of the vehicle is less than or equal to a preset second threshold value, determining that the vehicle is in a to-be-determined parking state;
determining that the vehicle is parked when the vehicle is determined to be in the to-be-determined parking state at least K times in succession;
wherein K is an integer greater than 1.
In some embodiments, N is greater than or equal to 20.
In some embodiments, acquiring a vehicle detection frame corresponding to the tracking identifier in each frame of image includes:
identifying a vehicle detection frame in the current frame road monitoring image;
and matching the vehicle detection frame with a vehicle detection frame corresponding to the existing tracking identifier, and if the matching degree between the vehicle detection frame and the vehicle detection frame corresponding to the existing tracking identifier is greater than a preset third threshold value, establishing the corresponding relation between the vehicle detection frame in the current road monitoring image and the existing tracking identifier.
The embodiment of the present disclosure further provides a reverse detection device, including:
the identification module is used for acquiring a road monitoring image and identifying a vehicle detection frame in the road monitoring image;
the first processing module is used for determining, for each vehicle detection frame, the orientation of the vehicle according to the vehicle detection frame, and judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located;
and the second processing module is used for judging whether the vehicle is reversing according to the driving direction of the vehicle when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, and determining that the vehicle is driving in the wrong direction when the orientation of the vehicle is inconsistent with the driving direction of the lane in which the vehicle is located.
In some embodiments, the first processing module comprises:
the determining unit is used for determining a lane where the center point of the vehicle detection frame is located;
and the acquisition unit is used for acquiring the driving direction of the lane and comparing the driving direction of the lane with the direction of the vehicle.
In some embodiments, the first processing module further comprises:
and the orientation processing unit is used for inputting the vehicle detection frame into a pre-trained vehicle attribute model, wherein the vehicle attribute model outputs which of the vehicle head and the vehicle tail is closer to the camera position.
In some embodiments, the apparatus further comprises:
the training module is used for establishing an initial vehicle attribute model, and inputting a plurality of groups of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, wherein each group of training data comprises a vehicle image and a label indicating which of the head and the tail is closer to the camera position.
In some embodiments, the second processing module comprises:
the distribution unit is used for distributing a corresponding tracking identifier for each vehicle;
the track acquisition unit is used for acquiring, for the tracking identifier corresponding to each vehicle, the vehicle detection frames corresponding to the tracking identifier in the latest N frames of road monitoring images, taking a preset position point of each vehicle detection frame as a track point to obtain a plurality of track points, and connecting the plurality of track points to obtain the track of the vehicle;
the judging unit is used for determining that the vehicle is in a to-be-determined reversing state when the included angle between the track of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold value;
the reversing judgment unit is used for determining that the vehicle is reversing when the vehicle is determined to be in the to-be-determined reversing state at least M times in succession and the length of the track of the vehicle is greater than a preset second threshold value;
wherein M and N are integers greater than 1.
In some embodiments, the second processing module further comprises:
the parking judgment unit is used for determining that the vehicle is in a to-be-determined parking state when the length of the track of the vehicle is less than or equal to the preset second threshold value, and determining that the vehicle is parked when the vehicle is determined to be in the to-be-determined parking state at least K times in succession;
wherein K is an integer greater than 1.
In some embodiments, the apparatus further comprises:
the matching module is used for identifying a vehicle detection frame in the current road monitoring image; and matching the vehicle detection frame with a vehicle detection frame corresponding to the existing tracking identifier, and if the matching degree between the vehicle detection frame and the vehicle detection frame corresponding to the existing tracking identifier is greater than a preset third threshold value, establishing the corresponding relation between the vehicle detection frame in the current road monitoring image and the existing tracking identifier.
Embodiments of the present disclosure also provide a readable storage medium on which a program or instructions are stored, which when executed by a processor implement the steps of the reversing detection method as described above.
The embodiment of the disclosure has the following beneficial effects:
in this scheme, the orientation of the vehicle is determined according to the vehicle detection frame, and it is first judged whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located; only when the two are consistent is it judged, according to the driving direction of the vehicle, whether the vehicle is reversing. This prevents the two behaviors of reversing and wrong-way driving from being confused with each other, and can therefore improve the recognition accuracy of reversing behavior.
Drawings
Fig. 1 is a schematic flow chart diagram of a reverse detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a vehicle detection frame in a road surveillance image identified according to an embodiment of the disclosure;
fig. 3 is a schematic structural diagram of a reverse detection device according to an embodiment of the disclosure.
Detailed Description
In order to make the technical problems, technical solutions and advantages to be solved by the embodiments of the present disclosure clearer, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
The embodiment of the disclosure provides a reversing detection method and a reversing detection device, which can improve the accuracy of reversing behavior identification.
An embodiment of the present disclosure provides a reverse detection method, as shown in fig. 1, including:
step 101: acquiring a road monitoring image, and identifying a vehicle detection frame in the road monitoring image;
specifically, the road condition may be photographed by a roadside camera to obtain a road monitoring image as shown in fig. 2, where the road monitoring image includes, but is not limited to, a monitoring image of an expressway, a monitoring image of a road in a city, and the like.
After the road monitoring image is obtained, the lanes in the road monitoring image may be calibrated with polygonal frames. As shown in fig. 2, the lanes S1 and S2 in the road monitoring image are calibrated, and after calibration their driving directions may be determined. Specifically, the driving direction of a lane may be determined from its position: for example, in fig. 2 the lane S2 is located on the right side of the lane S1, and since vehicles drive on the right, the driving direction of the lane S2 may be determined to be away from the camera position; correspondingly, since the lane S1 is located on the left side of the lane S2, its driving direction may be determined to be toward the camera position. Alternatively, in some road monitoring images an arrow indicating the driving direction is marked on the lane, and the driving direction of the lane can be determined directly from the direction indicated by the arrow.
In the present example, the driving direction of the lane S1 is opposite to that of the lane S2; the driving direction of the lane S1 may be recorded as 1 and that of the lane S2 as -1, where 1 represents the forward direction and -1 represents the reverse direction.
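The lane calibration and direction encoding described above can be sketched as follows: each calibrated lane is a polygon in image coordinates together with a direction code (1 or -1), and a point-in-polygon test assigns a point (such as the center of a vehicle detection frame) to a lane. This is an illustrative Python sketch, not part of the original disclosure; the function names and polygon coordinates are assumptions.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is point pt=(x, y) inside polygon poly=[(x1, y1), ...]?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count crossings of a horizontal ray from pt with edge (x1,y1)-(x2,y2)
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Calibrated lanes: polygon in image coordinates plus driving direction,
# encoded as in the text above: 1 = toward the camera, -1 = away from it.
LANES = {
    "S1": {"polygon": [(0, 0), (100, 0), (100, 400), (0, 400)], "direction": 1},
    "S2": {"polygon": [(100, 0), (200, 0), (200, 400), (100, 400)], "direction": -1},
}

def lane_of_point(pt):
    """Return (lane_id, direction) for the lane containing pt, or (None, None)."""
    for lane_id, lane in LANES.items():
        if point_in_polygon(pt, lane["polygon"]):
            return lane_id, lane["direction"]
    return None, None
```

With the assumed polygons, a detection-frame center at (50, 200) falls in lane S1 and one at (150, 200) in lane S2.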
After the road monitoring image is obtained, the vehicle detection frames in it can be identified; the vehicle detection frames calibrate the positions of the vehicles, and each vehicle detection frame uniquely corresponds to one vehicle. Specifically, a vehicle detection model may be trained in advance and used to identify the vehicle detection frames in the road monitoring image: its input is the road monitoring image and its output is the vehicle detection frames. In a specific example, a plurality of vehicle detection frames S3 are identified in the road monitoring image as shown in fig. 2.
To train the vehicle detection model, an initial vehicle detection model can be established and trained with a plurality of groups of training data, each group comprising a road monitoring image and the vehicle detection frames in it; after training with the plurality of groups of training data, the vehicle detection model is obtained.
Step 102: for each vehicle detection frame, determining the direction of a vehicle according to the vehicle detection frame, and judging whether the direction of the vehicle is consistent with the driving direction of the lane where the vehicle is located;
specifically, the orientation of the vehicle may be determined with a pre-trained vehicle attribute model: a vehicle image is cut out of the road monitoring image according to the vehicle detection frame and input into the pre-trained vehicle attribute model, which outputs whether the vehicle head or the vehicle tail is closer to the camera position.
If the vehicle attribute model outputs the vehicle head, the head is closer to the camera position and the tail is farther away, so the vehicle faces toward the camera and its orientation can be determined; if the model outputs the vehicle tail, the tail is closer to the camera position and the head is farther away, so the vehicle faces away from the camera and its orientation can likewise be determined.
Specifically, when the vehicle attribute model is trained, an initial vehicle attribute model is established; and inputting a plurality of groups of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, wherein each group of training data comprises a vehicle image and one of the head and the tail close to the camera shooting position.
When judging whether the direction of the vehicle is consistent with the driving direction of the lane where the vehicle is located, firstly determining the lane where the central point of the vehicle detection frame is located, then acquiring the driving direction of the lane, and comparing the driving direction of the lane with the direction of the vehicle.
For example, if the vehicle attribute model outputs the vehicle tail, the orientation of the vehicle can be determined to be away from the camera position, and whether the driving direction of the lane is consistent with the orientation of the vehicle can then be judged by combining this with the driving direction of the lane in which the vehicle is located. If the model outputs the vehicle head, the orientation of the vehicle is determined to be toward the camera position, and consistency is judged in the same way.
In a specific example: if the orientation of the vehicle is away from the camera position and the driving direction of the lane in which it is located is also away from the camera position, the two are consistent; if the orientation of the vehicle is toward the camera position while the driving direction of the lane is away from the camera position, they are inconsistent; if the orientation of the vehicle is away from the camera position while the driving direction of the lane is toward the camera position, they are inconsistent; and if the orientation of the vehicle is toward the camera position and the driving direction of the lane is also toward the camera position, they are consistent.
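The four consistency cases above reduce to an equality check once the model output and the lane direction are encoded on the same axis. A minimal sketch, assuming a ±1 encoding (+1 toward the camera, -1 away from it, matching the 1/-1 lane coding used earlier) and the string labels "head"/"tail" for the attribute model output; neither the labels nor the function names come from the disclosure.

```python
def vehicle_orientation(attribute_model_output):
    """Map the vehicle attribute model's output to an orientation code.

    If the model reports the *tail* is closer to the camera, the vehicle
    faces away from the camera (-1); if the *head* is closer, it faces
    toward the camera (+1). The labels are illustrative assumptions.
    """
    return -1 if attribute_model_output == "tail" else 1

def orientation_consistent(attribute_model_output, lane_direction):
    """True when the vehicle's orientation matches the lane's driving direction.

    lane_direction uses the same encoding: +1 toward the camera, -1 away.
    A mismatch indicates wrong-way driving rather than reversing.
    """
    return vehicle_orientation(attribute_model_output) == lane_direction
```

For instance, a tail-first vehicle in a lane whose driving direction is away from the camera is consistent, while a head-first vehicle in that lane is not.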
Step 103: when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, judging whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane in which the vehicle is located, determining that the vehicle is driving in the wrong direction.
When the driving direction of the lane is judged to be inconsistent with the orientation of the vehicle, the vehicle can be directly determined to be driving in the wrong direction without further judgment, and alarm information about the vehicle can be sent to the traffic management personnel. Specifically, a warning list may be established containing the vehicle information of offending vehicles, and the warning list may be sent to the traffic management personnel in real time or periodically. After the driving direction of the lane is judged to be inconsistent with the orientation of the vehicle, it is judged whether the vehicle information of the vehicle is already in the warning list, and if not, the vehicle information of the vehicle is added to the warning list.
Specifically, when judging whether the vehicle is reversing according to the driving direction of the vehicle: a corresponding tracking identifier is allocated to each vehicle; for the tracking identifier corresponding to each vehicle, the vehicle detection frames corresponding to the tracking identifier in the latest N frames of road monitoring images are acquired, and a preset position point of each vehicle detection frame is taken as a track point to obtain a plurality of track points; the track points are connected to obtain the track of the vehicle; when the included angle between the track of the vehicle and the driving direction of the lane in which it is located is greater than or equal to a preset first threshold value, the vehicle is determined to be in a to-be-determined reversing state; when the vehicle is determined to be in the to-be-determined reversing state at least M times in succession and the length of the track of the vehicle is greater than a preset second threshold value, the vehicle is determined to be reversing; wherein M and N are integers greater than 1.
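The per-track decision just described (a trajectory built from the latest N track points, an angle test against the lane direction, a consecutive-hit counter M, and a minimum track length) can be sketched as follows. The threshold values, the counter-reset behavior, and the class and function names are illustrative assumptions, not taken from the disclosure.

```python
import math

def trajectory_vector(track_points):
    """Overall displacement from the first to the last track point."""
    (x0, y0), (x1, y1) = track_points[0], track_points[-1]
    return (x1 - x0, y1 - y0)

def angle_between(v1, v2):
    """Angle in degrees between two 2-D vectors (0 for a zero vector)."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

class ReverseDetector:
    """Counts consecutive to-be-determined reversing states for one track."""

    def __init__(self, first_threshold=120.0, second_threshold=100.0, m=5):
        self.first_threshold = first_threshold    # degrees, angle test
        self.second_threshold = second_threshold  # pixels, minimum track length
        self.m = m                                # consecutive hits required
        self.counter = 0

    def update(self, track_points, lane_direction_vector):
        """Feed the latest N track points; return True once reversing is confirmed."""
        v = trajectory_vector(track_points)
        length = math.hypot(*v)
        if angle_between(v, lane_direction_vector) >= self.first_threshold:
            self.counter += 1          # assumed: reset on a non-reversing frame,
        else:                          # so only consecutive hits accumulate
            self.counter = 0
        return self.counter >= self.m and length > self.second_threshold
```

A track moving 120 pixels directly against the lane direction would trip the detector after m consecutive updates; the length check filters out a stationary vehicle, as the description notes further below.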
In this embodiment, vehicle detection frames may be tracked with a target tracking algorithm such as the SORT algorithm. Specifically, a Kalman filter is used to predict the position of the vehicle: the previous state value (for example, the position of the vehicle detection frame in the previous frame image) and the current state measurement (for example, the position of the vehicle detection frame identified in the current frame image) are used to predict an estimate of the next state (for example, the position of the vehicle detection frame in the next frame image). This prediction is then matched against the target detection result at the next time (the actual position of the vehicle detection frame identified in the next frame image) with the Hungarian algorithm, so that the vehicle detection frame predicted from the previous frames is associated with the vehicle detection frame detected in the next frame image. The detection frame in the next frame image then represents a successfully tracked vehicle, tracking of the vehicle is completed, and the vehicle detection frames in different frames are associated.
For example, for the vehicle detection frame A, the position of the vehicle detection frame A in the k +2 frame image is predicted by using the position of the vehicle detection frame A in the k frame image and the position of the vehicle detection frame A in the k +1 frame image, and a prediction result is obtained; and identifying the position of the vehicle detection frame B in the k +2 frame image, performing Hungarian algorithm matching on the prediction result and the position of the vehicle detection frame B, associating the vehicle detection frame B with the vehicle detection frame A if the matching degree between the prediction result and the position of the vehicle detection frame B is greater than a set matching degree threshold value, and considering that the vehicle detection frame B and the vehicle detection frame A are vehicle detection frames corresponding to the same vehicle, thereby completing the tracking of the vehicle in different frame images.
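The association step above can be illustrated with a simplified stand-in: the embodiment pairs Kalman-filter predictions with detections via the Hungarian algorithm, but the sketch below substitutes a greedy IoU matcher so the example stays dependency-free. It demonstrates only the matching idea, not the exact SORT procedure; the threshold value is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def associate(predicted, detected, iou_threshold=0.3):
    """Greedily match predicted track boxes {track_id: box} to detected
    boxes [box], highest IoU first. Returns {track_id: detection_index}.
    A stand-in for Hungarian matching, which is globally optimal."""
    pairs = sorted(
        ((iou(pbox, dbox), tid, di)
         for tid, pbox in predicted.items()
         for di, dbox in enumerate(detected)),
        reverse=True)
    matches, used_t, used_d = {}, set(), set()
    for score, tid, di in pairs:
        if score < iou_threshold or tid in used_t or di in used_d:
            continue
        matches[tid] = di
        used_t.add(tid)
        used_d.add(di)
    return matches
```

In the frame-A/frame-B example above, the prediction for detection frame A would be associated with whichever detection in frame k+2 overlaps it most, provided the matching degree exceeds the threshold.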
In this embodiment, in order to improve the accuracy of the estimation of the vehicle motion state, N may be greater than or equal to 20, for example, 20, 25, 30, 35, 40, and the like.
Taking N = 25 as an example, a corresponding tracking identifier is allocated to each vehicle, and the correspondence between the tracking identifier and the vehicle detection frames identified in the road monitoring images can be established by the target tracking algorithm. For example, a tracking identifier D is allocated to a vehicle C. At a first moment, the 25 vehicle detection frames corresponding to the tracking identifier D are identified in the 1st to 25th frame images (the latest 25 frames), the preset position point of each vehicle detection frame is determined and taken as a track point to obtain 25 track points, and the 25 track points are connected to obtain the vehicle track at the first moment; when the included angle between this track and the driving direction of the lane in which the vehicle is located is greater than or equal to the preset first threshold value, a preset reversing judgment counter is incremented by 1. At a second moment, the 25 vehicle detection frames corresponding to the tracking identifier D are identified in the 2nd to 26th frame images (the latest 25 frames) and processed in the same way to obtain the vehicle track at the second moment; when the included angle between this track and the driving direction of the lane is greater than or equal to the preset first threshold value, the counter is again incremented by 1. At a third moment, the 3rd to 27th frame images (the latest 25 frames) are processed likewise, and so on. When the value of the preset reversing judgment counter is greater than or equal to M, and the length of the track of the vehicle is greater than the preset second threshold value, the vehicle is determined to be reversing.
In this embodiment, the preset position point may be the center point of the vehicle detection frame; it is not limited to the center point, however, and may also be another position point of the vehicle detection frame, for example one of its vertices.
The length of the vehicle track is checked to avoid misjudging a vehicle that remains stationary in place as a reversing vehicle.
When the vehicle reverses with its track parallel to the driving direction of the lane in which it is located, the included angle between the track and the driving direction of the lane is 180 degrees. To improve the accuracy of the evaluation of the vehicle motion state, however, it is not necessary to require the angle to be exactly 180 degrees before determining that the vehicle is in the to-be-determined reversing state; it suffices that the included angle is greater than or equal to the preset first threshold value, where the first threshold value may be greater than or equal to 85°, for example 120°, 121°, 122°, 123°, 124°, 125°, and the like. The second threshold value may be expressed in image pixels, for example greater than or equal to 100 pixels, such as 110, 120, 130, 140, 150, 160, 170, or 180 pixels.
After determining that the vehicle is reversing, warning information about the vehicle may be sent to the traffic management personnel. Specifically, a warning list may be established containing the vehicle information of reversing vehicles, and the warning list may be sent to the traffic management personnel in real time or periodically. After the vehicle is determined to be reversing, it is judged whether the vehicle information of the vehicle is already in the warning list, and if not, the vehicle information of the vehicle is added to the warning list.
During tracking, factors such as occlusion may cause some tracking identifiers to disappear and new tracking identifiers to appear after the occlusion ends. Such jumps in the tracking identifier make the vehicle track points discontinuous, so the direction of motion of the vehicle cannot be judged accurately; the vehicle track points therefore need to be reconnected when vehicle tracking fails and the tracking identifier jumps.
To do this, the newly added tracking identifier is matched against the tracking identifier that disappeared at the same time. Specifically, a vehicle detection frame is identified in the current frame of the road monitoring image and matched against the vehicle detection frame corresponding to the existing tracking identifier. If the matching degree between the two is greater than a preset third threshold, that is, if the newly added identifier and the simultaneously disappeared identifier can be matched, the newly added tracking identifier is replaced with the existing one, and a correspondence is established between the vehicle detection frame in the current frame and the existing tracking identifier. This prevents the vehicle track from being lost when the tracking identifier jumps and supports accurate judgment of the vehicle motion direction.
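The re-association step can be sketched with an IoU match between the new detection frame and the last detection frame of each disappeared identifier; the 0.5 value for the third threshold and the box format are illustrative assumptions.

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def reassign_track_id(new_box, lost_tracks, third_threshold=0.5):
    """Return the disappeared tracking identifier whose last detection frame
    best matches new_box, or None when no match exceeds the threshold."""
    best_id, best_score = None, third_threshold
    for track_id, last_box in lost_tracks.items():
        score = box_iou(new_box, last_box)
        if score > best_score:
            best_id, best_score = track_id, score
    return best_id
```

A new detection that heavily overlaps a lost track inherits its identifier; one far from every lost track keeps its new identifier.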
This embodiment may also determine whether the vehicle is parked. Specifically, when the length of the vehicle's track is less than or equal to the preset second threshold, the vehicle is determined to be in a to-be-determined parking state; when the vehicle is determined to be in this state at least K consecutive times, the vehicle is determined to be parked, where K is an integer greater than 1.
Taking N as 25 as an example, a tracking identifier D is allocated to a vehicle C. At a first moment, the 25 vehicle detection frames corresponding to identifier D are identified in the 1st through 25th frame images (the latest 25 frames); a preset position point of each detection frame is determined and taken as a track point, the resulting 25 track points are connected to obtain the vehicle track at the first moment, and a preset parking judgment counter is incremented by 1 when the length of that track is smaller than the preset second threshold. At a second moment, the same procedure is applied to the 2nd through 26th frame images (the latest 25 frames) to obtain the vehicle track at the second moment, and the counter is again incremented when its length is smaller than the second threshold; at a third moment, the procedure is applied to the 3rd through 27th frame images, and so on. When the value of the parking judgment counter is greater than or equal to K, the vehicle is judged to be parked, and the initial parking time of the vehicle is then updated to the current time.
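The sliding-window counting in the example above can be sketched as follows; the window size, pixel threshold, and K are illustrative parameters, and the counter is reset when a window fails the test so that only consecutive short-track windows count.

```python
import math
from collections import deque

class ParkingJudge:
    """Keeps the latest n track points; each time the full window's track is
    shorter than the second threshold, the parking counter is incremented,
    and the vehicle is judged parked once the counter reaches k."""

    def __init__(self, n=25, second_threshold=100.0, k=3):
        self.points = deque(maxlen=n)
        self.second_threshold = second_threshold
        self.k = k
        self.counter = 0

    def _track_length(self):
        pts = list(self.points)
        return sum(math.hypot(x1 - x0, y1 - y0)
                   for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

    def update(self, point):
        self.points.append(point)
        if len(self.points) == self.points.maxlen:
            if self._track_length() <= self.second_threshold:
                self.counter += 1
            else:
                self.counter = 0  # the run of consecutive parked windows is broken
        return self.counter >= self.k  # True once the vehicle is judged parked
```

Feeding one point per frame, the judge stays False until the window fills and K consecutive short-track windows have been seen.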
In this embodiment, the orientation of the vehicle is determined from the vehicle detection frame, and it is first judged whether that orientation is consistent with the driving direction of the lane in which the vehicle is located; only when they are consistent is it further judged, from the driving direction of the vehicle, whether the vehicle is reversing. This avoids confusing the two behaviors of reversing and wrong-way driving and improves the accuracy with which reversing behavior is identified.
An embodiment of the present disclosure further provides a reverse detection device, as shown in fig. 3, including:
the identification module 21 is configured to acquire a road monitoring image and identify a vehicle detection frame in the road monitoring image;
specifically, the road condition may be captured by a roadside camera to obtain a road monitoring image as shown in fig. 2, where the road monitoring image includes, but is not limited to, a monitoring image of an expressway, a monitoring image of an urban road, and the like.
After the road monitoring image is obtained, the lanes in it may be marked with polygonal frames. As shown in fig. 2, lanes S1 and S2 are marked in the road monitoring image, after which their driving directions can be determined. In this example, the driving direction of lane S1 is opposite to that of lane S2; the driving direction of lane S1 may be recorded as 1 and that of lane S2 as -1, where 1 represents the forward direction and -1 represents the reverse direction.
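The lane marking and direction coding can be sketched with a ray-casting point-in-polygon test; the polygon coordinates below are made-up placeholders for the regions S1 and S2 of fig. 2.

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside the polygonal lane region?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# Lane polygons with driving directions coded 1 (forward) and -1 (reverse),
# following the S1/S2 example; the coordinates are illustrative.
LANES = {
    "S1": {"polygon": [(0, 0), (100, 0), (100, 300), (0, 300)], "direction": 1},
    "S2": {"polygon": [(100, 0), (200, 0), (200, 300), (100, 300)], "direction": -1},
}

def lane_of(point):
    """Return (lane name, direction code) for a point, or (None, 0) if off-lane."""
    for name, lane in LANES.items():
        if point_in_polygon(point, lane["polygon"]):
            return name, lane["direction"]
    return None, 0
```

A detection-frame position point falling inside S1 maps to direction 1, inside S2 to -1, and a point outside both lanes is left unclassified.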
After the road monitoring image is obtained, the vehicle detection frames in it can be identified; each vehicle detection frame calibrates the position of a vehicle and corresponds uniquely to that vehicle. Specifically, a vehicle detection model may be trained in advance and used to identify the vehicle detection frames: its input is the road monitoring image and its output is the vehicle detection frames. In a specific example, a plurality of vehicle detection frames S3 are identified in the road monitoring image shown in fig. 2.
To train the vehicle detection model, an initial vehicle detection model is established and trained with a plurality of groups of training data, each group comprising a road monitoring image and the vehicle detection frames in that image; the trained vehicle detection model is obtained after this training.
The first processing module 22 is configured to determine, for each vehicle detection frame, a vehicle direction according to the vehicle detection frame, and determine whether the vehicle direction is consistent with a driving direction of a lane where the vehicle is located;
specifically, the orientation of the vehicle may be determined by using a pre-trained vehicle attribute model, a vehicle image may be cut out from the road monitoring image according to the vehicle detection frame, the vehicle image may be input into the pre-trained vehicle attribute model, and the head or the tail of the vehicle may be output, where the result output by the vehicle attribute model is one of the head and the tail close to the camera position.
If the vehicle attribute model outputs the head, the head of the vehicle is closer to the camera position and the tail is farther away; if it outputs the tail, the tail is closer to the camera position and the head is farther away. In either case the orientation of the vehicle can be determined.
Specifically, when the vehicle attribute model is trained, an initial vehicle attribute model is established; and inputting a plurality of groups of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, wherein each group of training data comprises a vehicle image and one of the head and the tail close to the camera shooting position.
To judge whether the orientation of the vehicle is consistent with the driving direction of the lane in which it is located, the lane containing the preset position point of the vehicle detection frame is determined first; the driving direction of that lane is then obtained and compared with the orientation of the vehicle.
For example, if the vehicle attribute model outputs the tail, the vehicle is determined to be facing away from the camera position; its orientation follows from this, and whether that orientation is consistent with the driving direction of the lane is determined by combining the two. If the model outputs the head, the vehicle is determined to be facing toward the camera position, its orientation follows accordingly, and the same consistency check is performed against the driving direction of the lane.
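The comparison can be sketched as below. Note that mapping "head visible" or "tail visible" onto the lane's 1/-1 direction code depends on how the camera is mounted relative to the lane; the camera_faces_forward flag is an assumed calibration input, not something specified by this disclosure.

```python
def vehicle_orientation(attribute_output, camera_faces_forward=True):
    """Map the attribute model output ('head' or 'tail' nearest the camera)
    to a lane-style direction code (1 = forward, -1 = reverse).
    camera_faces_forward is an assumed per-camera calibration flag."""
    away_from_camera = attribute_output == "tail"  # tail visible: driving away
    if camera_faces_forward:
        return 1 if away_from_camera else -1
    return -1 if away_from_camera else 1

def orientation_consistent(attribute_output, lane_direction,
                           camera_faces_forward=True):
    # Compare the inferred vehicle orientation with the lane's direction code.
    return vehicle_orientation(attribute_output, camera_faces_forward) == lane_direction
```

Under this assumed mounting, a tail-visible vehicle in a forward (1) lane is consistent, while a head-visible vehicle in the same lane is flagged as inconsistent.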
The second processing module 23 is configured to, when the direction of the vehicle is consistent with the driving direction of the lane where the vehicle is located, determine whether the vehicle backs up according to the driving direction of the vehicle, and when the direction of the vehicle is not consistent with the driving direction of the lane where the vehicle is located, determine that the vehicle is driving in the reverse direction.
When the driving direction of the lane is judged to be inconsistent with the orientation of the vehicle, the vehicle can be directly determined to be driving in the reverse direction (wrong-way driving) without further judgment, and warning information about the vehicle can be sent to traffic management personnel. Specifically, a warning list containing the vehicle information of offending vehicles may be established and transmitted to traffic management personnel in real time or periodically. After the driving direction of the lane is judged to be inconsistent with the orientation of the vehicle, it is checked whether the vehicle's information is already in the warning list; if not, it is added.
Specifically, when judging whether the vehicle is reversing according to its driving direction, a corresponding tracking identifier is allocated to each vehicle; for the tracking identifier corresponding to each vehicle, the vehicle detection frames corresponding to that identifier in the latest N frames of road monitoring images are acquired, and the preset position point of each detection frame is taken as a track point, yielding a plurality of track points; the track points are connected to obtain the track of the vehicle; when the included angle between the track of the vehicle and the driving direction of the lane in which it is located is greater than or equal to a preset first threshold, the vehicle is determined to be in a to-be-determined reversing state; when the vehicle is determined to be in this state at least M consecutive times and the length of its track is greater than a preset second threshold, the vehicle is determined to be reversing; M and N are integers greater than 1.
In this embodiment, the vehicle detection frames may be tracked with a target tracking algorithm such as the SORT algorithm. Specifically, a Kalman filter predicts the position of the vehicle: the previous state value (for example, the position of the vehicle detection frame in the previous frame image) and the current measured state value (for example, the position of the detection frame identified in the current frame image) are used to estimate the next state (for example, the position of the detection frame in the next frame image), pre-judging the vehicle's position. The pre-judged result is then matched, using the Hungarian algorithm, against the target detection result at the next moment (for example, the actual position of the detection frame identified in the next frame image). According to the matching result, the detection frame predicted from the previous frame is associated with the detection frame detected in the next frame, so that the detection frame detected in the next frame represents a successfully tracked vehicle. In this way the vehicle is tracked and its detection frames in different frames are associated.
For example, for vehicle detection frame A, its position in the (k+2)-th frame image is predicted from its positions in the k-th and (k+1)-th frame images, yielding a prediction result. The position of a vehicle detection frame B is then identified in the (k+2)-th frame image and matched against the prediction result with the Hungarian algorithm. If the matching degree between the prediction result and the position of frame B is greater than a set matching-degree threshold, frame B is associated with frame A, and the two are considered detection frames of the same vehicle, completing the tracking of the vehicle across frame images.
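The association step can be sketched as an assignment problem over an IoU score matrix. Production code would use a real Hungarian solver (for example scipy.optimize.linear_sum_assignment); the brute-force search over permutations below stands in for it and is only workable for the handful of vehicles in one frame. All names and the 0.3 gate are illustrative.

```python
from itertools import permutations

def box_iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match_tracks(predicted, detected, min_iou=0.3):
    """Assign predicted track boxes to detected boxes, maximizing total IoU.
    Brute force over permutations stands in for the Hungarian algorithm."""
    m, n = len(predicted), len(detected)
    best, best_score = {}, -1.0
    for perm in permutations(range(n), min(m, n)):
        pairs = dict(zip(range(m), perm))
        score = sum(box_iou(predicted[i], detected[j]) for i, j in pairs.items())
        if score > best_score:
            best, best_score = pairs, score
    # Keep only assignments that actually overlap enough to count as a match.
    return {i: j for i, j in best.items()
            if box_iou(predicted[i], detected[j]) >= min_iou}
```

Two predicted boxes matched against the same boxes in shuffled order recover the correct pairing, with low-overlap assignments dropped.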
In this embodiment, to improve the accuracy of the vehicle motion state evaluation, N may be greater than or equal to 20, for example 20, 25, 30, 35, or 40.
Taking N as 25 as an example, a corresponding tracking identifier is allocated to each vehicle, and the correspondence between the tracking identifier and the vehicle detection frames identified in the road monitoring images can be established with the target tracking algorithm. For example, a tracking identifier D is allocated to a vehicle C. At a first moment, the 25 vehicle detection frames corresponding to identifier D are identified in the 1st through 25th frame images (the latest 25 frames); a preset position point of each detection frame is determined and taken as a track point, the resulting 25 track points are connected to obtain the vehicle track at the first moment, and a preset reversing judgment counter is incremented by 1 when the included angle between that track and the driving direction of the lane in which the vehicle is located is greater than or equal to the preset first threshold. At a second moment, the same procedure is applied to the 2nd through 26th frame images (the latest 25 frames) to obtain the vehicle track at the second moment, and the counter is again incremented under the same condition; at a third moment, the procedure is applied to the 3rd through 27th frame images, and so on. When the value of the reversing judgment counter is greater than or equal to M and the length of the vehicle track is greater than the preset second threshold, the vehicle is judged to be reversing.
The length of the vehicle track is checked to avoid judging a vehicle that is standing still as reversing.
When a vehicle backs up along its lane, its track is antiparallel to the driving direction of the lane, so the included angle between the two is 180 degrees. To make the evaluation of the vehicle motion state robust, however, it is not required that this angle be exactly 180 degrees before the vehicle is determined to be in the to-be-determined reversing state: the vehicle may be determined to be in that state whenever the included angle between its track and the driving direction of its lane is greater than or equal to the preset first threshold. The first threshold may be greater than or equal to 85°, for example 120°, 121°, 122°, 123°, 124°, or 125°. The second threshold may be expressed in image pixels and may be greater than or equal to 100 pixels, for example 110, 120, 130, 140, 150, 160, 170, or 180 pixels.
After the vehicle is determined to be reversing, warning information about the vehicle may be sent to traffic management personnel. Specifically, a warning list containing the vehicle information of reversing vehicles may be established and transmitted to traffic management personnel in real time or periodically. Each time a vehicle is determined to be reversing, it is first checked whether its vehicle information is already in the warning list; the information is added only if it is not.
During tracking, a tracking identifier may disappear because of occlusions or similar factors, and a new tracking identifier may appear once the occlusion clears. Such jumps in the tracking identifier make the vehicle track points discontinuous, so the motion direction of the vehicle cannot be judged accurately; the vehicle track points therefore need to be reconnected when vehicle tracking fails and the tracking identifier jumps.
To do this, the newly added tracking identifier is matched against the tracking identifier that disappeared at the same time. Specifically, a vehicle detection frame is identified in the current frame of the road monitoring image and matched against the vehicle detection frame corresponding to the existing tracking identifier. If the matching degree between the two is greater than a preset third threshold, that is, if the newly added identifier and the simultaneously disappeared identifier can be matched, the newly added tracking identifier is replaced with the existing one, and a correspondence is established between the vehicle detection frame in the current frame and the existing tracking identifier. This prevents the vehicle track from being lost when the tracking identifier jumps and supports accurate judgment of the vehicle motion direction.
This embodiment may also determine whether the vehicle is parked. Specifically, when the length of the vehicle's track is less than or equal to the preset second threshold, the vehicle is determined to be in a to-be-determined parking state; when the vehicle is determined to be in this state at least K consecutive times, the vehicle is determined to be parked, where K is an integer greater than 1.
Taking N as 25 as an example, a tracking identifier D is allocated to a vehicle C. At a first moment, the 25 vehicle detection frames corresponding to identifier D are identified in the 1st through 25th frame images (the latest 25 frames); a preset position point of each detection frame is determined and taken as a track point, the resulting 25 track points are connected to obtain the vehicle track at the first moment, and a preset parking judgment counter is incremented by 1 when the length of that track is smaller than the preset second threshold. At a second moment, the same procedure is applied to the 2nd through 26th frame images (the latest 25 frames) to obtain the vehicle track at the second moment, and the counter is again incremented when its length is smaller than the second threshold; at a third moment, the procedure is applied to the 3rd through 27th frame images, and so on. When the value of the parking judgment counter is greater than or equal to K, the vehicle is judged to be parked, and the initial parking time of the vehicle is then updated to the current time.
In this embodiment, the orientation of the vehicle is determined from the vehicle detection frame, and it is first judged whether that orientation is consistent with the driving direction of the lane in which the vehicle is located; only when they are consistent is it further judged, from the driving direction of the vehicle, whether the vehicle is reversing. This avoids confusing the two behaviors of reversing and wrong-way driving and improves the accuracy with which reversing behavior is identified.
In some embodiments, the first processing module 22 comprises:
the determining unit is used for determining a lane where the center point of the vehicle detection frame is located;
and the acquisition unit is used for acquiring the driving direction of the lane and comparing the driving direction of the lane with the direction of the vehicle.
In some embodiments, the first processing module 22 further comprises:
and the orientation processing unit is used for inputting the vehicle detection frame into a pre-trained vehicle attribute model, the result output by the vehicle attribute model being whichever of the head and the tail is close to the camera position.
In some embodiments, the apparatus further comprises:
the training module is used for establishing an initial vehicle attribute model; and inputting a plurality of groups of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, wherein each group of training data comprises a vehicle image and one of the head and the tail close to the camera shooting position.
In some embodiments, the second processing module 23 comprises:
the distribution unit is used for distributing a corresponding tracking identifier for each vehicle;
the track acquisition unit is used for acquiring a vehicle detection frame corresponding to the tracking identifier in the latest N frames of road monitoring images aiming at the tracking identifier corresponding to each vehicle, and taking the preset position points of the vehicle detection frame as track points to obtain a plurality of track points; connecting the plurality of track points to obtain the track of the vehicle;
the judging unit is used for determining that the vehicle is in a reversing state to be judged when an included angle between the track of the vehicle and the driving direction of the lane where the vehicle is located is larger than or equal to a preset first threshold value;
the reversing judgment unit is used for determining that the vehicle reverses when the vehicle is determined to be in a reversing state to be determined for at least M times continuously and the length of the track of the vehicle is greater than a preset second threshold value;
wherein M and N are integers more than 1.
In some embodiments, the second processing module further comprises:
the parking judgment unit is used for determining that the vehicle is in a to-be-determined parking state when the length of the track of the vehicle is smaller than or equal to a preset second threshold value, and for determining that the vehicle is parked when the vehicle is determined to be in the to-be-determined parking state at least K times in succession;
wherein K is an integer greater than 1.
In some embodiments, the apparatus further comprises:
the matching module is used for identifying a vehicle detection frame in the current road monitoring image; and matching the vehicle detection frame with a vehicle detection frame corresponding to the existing tracking identifier, and if the matching degree between the vehicle detection frame and the vehicle detection frame corresponding to the existing tracking identifier is greater than a preset third threshold value, establishing the corresponding relation between the vehicle detection frame in the current road monitoring image and the existing tracking identifier.
Embodiments of the present disclosure also provide a readable storage medium on which a program or instructions are stored, which when executed by a processor implement the steps of the reversing detection method as described above.
Wherein, the processor is the processor in the terminal described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
An embodiment of the present application further provides a chip comprising a processor and a communication interface coupled to the processor, the processor being configured to run programs or instructions to implement the processes of the reversing detection method embodiments above and achieve the same technical effects; details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system-on-chip or a system-on-chip, etc.
The embodiment of the present application further provides a computer program/program product, where the computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement each process of the above-mentioned embodiment of the reverse detection method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of another like element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (16)

1. A method of detecting reversing, comprising:
acquiring a road monitoring image, and identifying a vehicle detection frame in the road monitoring image;
for each vehicle detection frame, determining the direction of a vehicle according to the vehicle detection frame, and judging whether the direction of the vehicle is consistent with the driving direction of the lane where the vehicle is located;
when the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located, judging whether the vehicle backs up or not according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane where the vehicle is located, determining that the vehicle runs reversely.
2. The reversing detection method according to claim 1, wherein the judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located comprises:
determining a lane where a center point of the vehicle detection frame is located;
and acquiring the driving direction of the lane, and comparing the driving direction of the lane with the orientation of the vehicle.
3. The reversing detection method according to claim 1, wherein the determining the orientation of the vehicle according to the vehicle detection frame comprises:
inputting the vehicle detection frame into a pre-trained vehicle attribute model, wherein the result output by the vehicle attribute model is whichever of the head and the tail of the vehicle is closer to the camera position.
4. The reversing detection method according to claim 3, further comprising the step of training the vehicle property model, the step of training the vehicle property model comprising:
establishing an initial vehicle attribute model;
and inputting a plurality of groups of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, wherein each group of training data comprises a vehicle image and a label indicating which of the head and the tail is closer to the camera position.
5. The reversing detection method according to claim 1, wherein the judging whether the vehicle is reversing according to the driving direction of the vehicle comprises:
allocating a corresponding tracking identifier for each vehicle;
for the tracking identifier corresponding to each vehicle, acquiring the vehicle detection frames corresponding to the tracking identifier in the latest N frames of road monitoring images, and taking a preset position point of each vehicle detection frame as a track point to obtain a plurality of track points;
connecting the plurality of track points to obtain the track of the vehicle;
when an included angle between the track of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold value, determining that the vehicle is in a reversing state to be determined;
when the vehicle is determined to be in the reversing state to be determined for at least M consecutive times and the length of the track of the vehicle is greater than a preset second threshold value, determining that the vehicle is reversing;
wherein M and N are integers greater than 1.
6. The reversing detection method according to claim 5, further comprising:
when the length of the track of the vehicle is less than or equal to the preset second threshold value, determining that the vehicle is in a parking state to be determined;
when the vehicle is determined to be in the parking state to be determined for at least K consecutive times, determining that the vehicle is parked;
wherein K is an integer greater than 1.
7. The reversing detection method according to claim 5, wherein N is greater than or equal to 20.
8. The reversing detection method according to claim 5, wherein acquiring the vehicle detection frame corresponding to the tracking identifier in each frame of road monitoring image comprises:
identifying a vehicle detection frame in the current frame road monitoring image;
and matching the vehicle detection frame with the vehicle detection frame corresponding to an existing tracking identifier, and if the matching degree between the two is greater than a preset third threshold value, establishing a correspondence between the vehicle detection frame in the current frame of road monitoring image and the existing tracking identifier.
9. A reversing detection device, comprising:
the identification module is used for acquiring a road monitoring image and identifying a vehicle detection frame in the road monitoring image;
the first processing module is used for determining, for each vehicle detection frame, the orientation of the vehicle according to the vehicle detection frame, and judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located;
and the second processing module is used for judging, according to the driving direction of the vehicle, whether the vehicle is reversing when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, and determining that the vehicle is driving in the wrong direction when the orientation of the vehicle is inconsistent with the driving direction of the lane in which the vehicle is located.
10. The reversing detection device according to claim 9, wherein the first processing module comprises:
the determining unit is used for determining a lane where the center point of the vehicle detection frame is located;
and the acquisition unit is used for acquiring the driving direction of the lane and comparing the driving direction of the lane with the orientation of the vehicle.
11. The reversing detection device according to claim 9, wherein the first processing module further comprises:
and the orientation processing unit is used for inputting the vehicle detection frame into a pre-trained vehicle attribute model, wherein the result output by the vehicle attribute model is whichever of the head and the tail of the vehicle is closer to the camera position.
12. The reversing detection device according to claim 11, further comprising:
the training module, used for establishing an initial vehicle attribute model, and inputting a plurality of groups of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, wherein each group of training data comprises a vehicle image and a label indicating which of the head and the tail is closer to the camera position.
13. The reversing detection device according to claim 9, wherein the second processing module comprises:
the distribution unit is used for distributing a corresponding tracking identifier for each vehicle;
the track acquisition unit is used for acquiring, for the tracking identifier corresponding to each vehicle, the vehicle detection frames corresponding to the tracking identifier in the latest N frames of road monitoring images, taking a preset position point of each vehicle detection frame as a track point to obtain a plurality of track points, and connecting the plurality of track points to obtain the track of the vehicle;
the judging unit is used for determining that the vehicle is in a reversing state to be determined when an included angle between the track of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold value;
the reversing judgment unit is used for determining that the vehicle is reversing when the vehicle is determined to be in the reversing state to be determined for at least M consecutive times and the length of the track of the vehicle is greater than a preset second threshold value;
wherein M and N are integers greater than 1.
14. The reversing detection device according to claim 13, wherein the second processing module further comprises:
the parking judgment unit, used for determining that the vehicle is in a parking state to be determined when the length of the track of the vehicle is less than or equal to the preset second threshold value, and determining that the vehicle is parked when the vehicle is determined to be in the parking state to be determined for at least K consecutive times;
wherein K is an integer greater than 1.
15. The reversing detection device according to claim 13, further comprising:
the matching module, used for identifying a vehicle detection frame in the current frame of road monitoring image, matching the vehicle detection frame with the vehicle detection frame corresponding to an existing tracking identifier, and, if the matching degree between the two is greater than a preset third threshold value, establishing a correspondence between the vehicle detection frame in the current frame of road monitoring image and the existing tracking identifier.
16. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the reversing detection method according to any one of claims 1 to 8.
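The decision logic of claim 1 — distinguishing reversing from wrong-way driving by first checking the vehicle's orientation against its lane — can be sketched as follows. This is an illustrative reading of the claim, not code from the patent; the function and label names are invented for clarity.

```python
def classify_vehicle(orientation_matches_lane: bool, moving_backward: bool) -> str:
    """Claim-1 decision tree (illustrative).

    orientation_matches_lane -- result of comparing the vehicle's orientation
        with the driving direction of its lane (claim 2)
    moving_backward -- result of the travel-direction test (claims 5-7)
    """
    if not orientation_matches_lane:
        # Facing against the lane: the vehicle is driving the wrong way,
        # so the reversing test is not even consulted.
        return "wrong-way"
    # Facing along the lane: reversing only if it is actually moving backward.
    return "reversing" if moving_backward else "normal"
```

The key design point the claim encodes is that the orientation check gates the reversing check: a backward-moving vehicle that also faces backward is classified as wrong-way driving, not reversing.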
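The trajectory test of claims 5-7 — compare the angle between the vehicle's track and the lane's driving direction against a first threshold, and the track length against a second threshold — can be sketched in Python. The threshold values and function name here are illustrative assumptions; the patent does not fix concrete numbers beyond N ≥ 20.

```python
import math

def reversing_candidate(track, lane_dir, angle_thresh_deg=90.0, min_length=5.0):
    """Claim-5 test sketch: is the vehicle in a 'reversing state to be determined'
    with a track long enough to confirm reversing?

    track    -- list of (x, y) track points from the latest N detection frames
    lane_dir -- (dx, dy) vector of the lane's driving direction
    """
    # Net displacement of the vehicle over its track.
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    length = math.hypot(dx, dy)
    if length == 0:
        return False
    # Included angle between displacement and lane direction, via dot product.
    cos_a = (dx * lane_dir[0] + dy * lane_dir[1]) / (length * math.hypot(*lane_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    # First threshold on the angle, second threshold on the track length.
    return angle >= angle_thresh_deg and length > min_length
```

Per claim 5, a full detector would additionally require this condition to hold for M consecutive evaluations before declaring reversing, which filters out single-frame tracking noise.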
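Claim 8 associates a detection frame with an existing tracking identifier when their "matching degree" exceeds a third threshold. A common instantiation of such a matching degree is intersection-over-union (IoU) of the bounding boxes; the patent does not name the metric, so the sketch below is one plausible choice, with invented names and an illustrative threshold.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_track(new_box, tracks, iou_thresh=0.5):
    """Return the existing tracking identifier whose last detection frame
    best overlaps new_box, provided the overlap exceeds the (third) threshold;
    otherwise None, meaning a new identifier should be allocated.

    tracks -- dict mapping tracking identifier -> last matched box
    """
    best_id, best_iou = None, iou_thresh
    for track_id, last_box in tracks.items():
        score = iou(new_box, last_box)
        if score > best_iou:
            best_id, best_iou = track_id, score
    return best_id
```

Taking the best match above the threshold, rather than the first, keeps the association stable when two tracked vehicles partially overlap in the image.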
CN202211415757.8A 2022-11-11 2022-11-11 Method and device for detecting backing up Pending CN115762153A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211415757.8A CN115762153A (en) 2022-11-11 2022-11-11 Method and device for detecting backing up
PCT/CN2023/121892 WO2024098992A1 (en) 2022-11-11 2023-09-27 Vehicle reversing detection method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211415757.8A CN115762153A (en) 2022-11-11 2022-11-11 Method and device for detecting backing up

Publications (1)

Publication Number Publication Date
CN115762153A true CN115762153A (en) 2023-03-07

Family

ID=85370000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211415757.8A Pending CN115762153A (en) 2022-11-11 2022-11-11 Method and device for detecting backing up

Country Status (2)

Country Link
CN (1) CN115762153A (en)
WO (1) WO2024098992A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024098992A1 (en) * 2022-11-11 2024-05-16 京东方科技集团股份有限公司 Vehicle reversing detection method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112530170A (en) * 2020-12-16 2021-03-19 济南博观智能科技有限公司 Vehicle driving state detection method and device, electronic equipment and storage medium
CN113177509B (en) * 2021-05-19 2022-10-25 浙江大华技术股份有限公司 Method and device for recognizing backing behavior
CN113903008A (en) * 2021-10-26 2022-01-07 中远海运科技股份有限公司 Ramp exit vehicle violation identification method based on deep learning and trajectory tracking
CN114049610B (en) * 2021-12-02 2024-06-14 公安部交通管理科学研究所 Active discovery method for motor vehicle reversing and reverse driving illegal behaviors on expressway
CN115762153A (en) * 2022-11-11 2023-03-07 京东方科技集团股份有限公司 Method and device for detecting backing up


Also Published As

Publication number Publication date
WO2024098992A1 (en) 2024-05-16

Similar Documents

Publication Publication Date Title
Naphade et al. The 2018 nvidia ai city challenge
CN112069643B (en) Automatic driving simulation scene generation method and device
JP7499256B2 System and method for classifying driver behavior
US11836985B2 (en) Identifying suspicious entities using autonomous vehicles
EP3751480B1 (en) System and method for detecting on-street parking violations
CN106611512B (en) Method, device and system for processing starting of front vehicle
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN108460968A (en) A kind of method and device obtaining traffic information based on car networking
CN110942038B (en) Traffic scene recognition method and device based on vision, medium and electronic equipment
CN111028529A (en) Vehicle-mounted device installed in vehicle, and related device and method
CN108932849B (en) Method and device for recording low-speed running illegal behaviors of multiple motor vehicles
CN110032947B (en) Method and device for monitoring occurrence of event
CN111753634B (en) Traffic event detection method and equipment
CN108932850B (en) Method and device for recording low-speed driving illegal behaviors of motor vehicle
CN109101939A (en) Determination method, system, terminal and the readable storage medium storing program for executing of state of motion of vehicle
CN113012436A (en) Road monitoring method and device and electronic equipment
CN115393803A (en) Vehicle violation detection method, device and system and storage medium
WO2024098992A1 (en) Vehicle reversing detection method and apparatus
CN109887303B (en) Lane-changing behavior early warning system and method
CN109300313B (en) Illegal behavior detection method, camera and server
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
CN111967451B (en) Road congestion detection method and device
CN113989715A (en) Vehicle parking violation detection method and device, electronic equipment and storage medium
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN115394089A (en) Vehicle information fusion display method, sensorless passing system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination