WO2024098992A1 - Reversing detection method and device

Reversing detection method and device

Info

Publication number
WO2024098992A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
detection frame
reversing
lane
trajectory
Application number
PCT/CN2023/121892
Other languages
English (en)
French (fr)
Inventor
CHEN Mingxuan (陈明轩)
Original Assignee
BOE Technology Group Co., Ltd.
Application filed by BOE Technology Group Co., Ltd.
Publication of WO2024098992A1


Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to a reverse detection method and device.
  • reversing behavior can be identified through on-site supervision by police and auxiliary personnel, as well as through road surveillance video, but this consumes substantial human resources. With the development of computer vision, computer vision technology has gradually found wide use in the field of traffic monitoring. Recognizing and detecting reversing behavior through computer vision technology can save police resources, but existing computer vision techniques identify reversing behavior with low accuracy.
  • the embodiments of the present disclosure provide a method and device for detecting a reversing vehicle, which can improve the recognition accuracy of reversing behavior.
  • a reverse detection method comprising:
  • determining whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located includes:
  • the driving direction of the lane is obtained, and the driving direction of the lane is compared with the orientation of the vehicle.
  • determining the orientation of the vehicle according to the vehicle detection frame includes:
  • the vehicle detection frame is input into a pre-trained vehicle attribute model, and the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
  • the method further comprises the step of training the vehicle attribute model, wherein the step of training the vehicle attribute model comprises:
  • each set of the training data includes a vehicle image and one of the front and rear of the vehicle close to a camera position.
  • determining whether the vehicle is reversing according to the driving direction of the vehicle includes:
  • For each tracking mark corresponding to the vehicle, obtain a vehicle detection frame corresponding to the tracking mark in the latest N frames of road monitoring images, and use a preset position point of the vehicle detection frame as a track point to obtain a plurality of track points;
  • M and N are integers greater than 1.
  • the method further comprises:
  • K is an integer greater than 1.
  • N is greater than or equal to 20.
  • obtaining a vehicle detection frame corresponding to the tracking mark in each frame of the image includes:
  • the vehicle detection frame is matched with a vehicle detection frame corresponding to an existing tracking mark. If the matching degree between the vehicle detection frame and the vehicle detection frame corresponding to the existing tracking mark is greater than a preset third threshold, a corresponding relationship between the vehicle detection frame in the current frame road monitoring image and the existing tracking mark is established.
  • the embodiment of the present disclosure further provides a reverse detection device, comprising:
  • a recognition module used to acquire a road monitoring image and recognize a vehicle detection frame in the road monitoring image
  • a first processing module is used to determine the orientation of the vehicle according to each vehicle detection frame, and judge whether the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located;
  • the second processing module is used to determine whether the vehicle is reversing according to the driving direction of the vehicle when the direction of the vehicle is consistent with the driving direction of the lane where the vehicle is located, and determine that the vehicle is traveling in the opposite direction when the direction of the vehicle is inconsistent with the driving direction of the lane where the vehicle is located.
  • the first processing module includes:
  • a determination unit used to determine the lane where the center point of the vehicle detection frame is located
  • the acquisition unit is used to acquire the driving direction of the lane and compare the driving direction of the lane with the orientation of the vehicle.
  • the first processing module further includes:
  • a direction processing unit used to input the vehicle detection frame into a pre-trained vehicle attribute model, where the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
  • the device further comprises:
  • the training module is used to establish an initial vehicle attribute model; multiple sets of training data are input into the initial vehicle attribute model for training to obtain the vehicle attribute model, each set of training data includes a vehicle image and one of the front and rear of the vehicle close to the camera position.
  • the second processing module includes:
  • an allocation unit used to allocate a corresponding tracking mark to each vehicle;
  • a trajectory acquisition unit is used to acquire, for each tracking mark corresponding to the vehicle, a vehicle detection frame corresponding to the tracking mark in the latest N frames of road monitoring images, and use a preset position point of the vehicle detection frame as a trajectory point to obtain a plurality of trajectory points; and connect the plurality of trajectory points to obtain a trajectory of the vehicle;
  • a judgment unit configured to determine that the vehicle is in a reversing state to be determined when an angle between a trajectory of the vehicle and a driving direction of a lane in which the vehicle is located is greater than or equal to a preset first threshold
  • a reversing judgment unit configured to determine that the vehicle is reversing when the vehicle is determined to be in a reversing state to be determined for at least M consecutive times and when the length of the trajectory of the vehicle is greater than a preset second threshold;
  • M and N are integers greater than 1.
  • the second processing module further includes:
  • a parking determination unit configured to determine that the vehicle is in a parking state to be determined when the length of the trajectory of the vehicle is less than or equal to a preset second threshold value; and determine that the vehicle is parked when the vehicle is in a parking state to be determined for at least K consecutive times;
  • K is an integer greater than 1.
  • the device further comprises:
  • the matching module is used to identify a vehicle detection frame in a current frame road monitoring image; match the vehicle detection frame with a vehicle detection frame corresponding to an existing tracking mark, and if the matching degree between the vehicle detection frame and the vehicle detection frame corresponding to the existing tracking mark is greater than a preset third threshold, establish a corresponding relationship between the vehicle detection frame in the current frame road monitoring image and the existing tracking mark.
  • An embodiment of the present disclosure further provides a readable storage medium, on which a program or instruction is stored.
  • the orientation of the vehicle is determined based on the vehicle detection frame. First, it is determined whether the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located. When the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located, it is determined whether the vehicle is reversing based on the driving direction of the vehicle. This can avoid confusing the two behaviors of reversing and reverse driving, and can improve the recognition accuracy of reversing behavior.
  • FIG. 1 is a schematic flow chart of a reversing detection method according to an embodiment of the present disclosure;
  • FIG. 2 is a schematic diagram of identifying a vehicle detection frame in a road monitoring image according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic structural diagram of a reversing detection device according to an embodiment of the present disclosure.
  • the embodiments of the present disclosure provide a method and device for detecting a reversing vehicle, which can improve the recognition accuracy of reversing behavior.
  • An embodiment of the present disclosure provides a reversing detection method, as shown in FIG. 1, comprising:
  • Step 101: Acquire a road monitoring image and identify a vehicle detection frame in the road monitoring image;
  • the road conditions may be photographed by roadside cameras to obtain a road monitoring image as shown in FIG. 2 .
  • the road monitoring image includes but is not limited to monitoring images of expressways, monitoring images of roads within a city, and the like.
  • the lanes in the road monitoring image can be calibrated using a polygonal frame. As shown in FIG. 2, lanes S1 and S2 in the road monitoring image are calibrated. After lanes S1 and S2 are calibrated, their driving directions can be determined. Specifically, the driving direction of a lane can be determined from its position. For example, in FIG. 2, lane S2 is located on the right side of lane S1; since vehicles drive on the right, it can be determined that the driving direction of lane S2 is away from the camera position. Lane S1 is located on the left side of lane S2, so the driving direction of lane S1 is toward the camera position.
  • some lanes in the road monitoring image are marked with direction arrows indicating the lane's driving direction; if a direction arrow is present, the driving direction of the lane can be determined directly from the direction indicated by the arrow.
  • lane S1 and lane S2 have opposite driving directions, so the driving direction of lane S1 can be recorded as 1, and the driving direction of lane S2 can be recorded as -1, where 1 represents the forward direction and -1 represents the reverse direction.
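As an illustration of the 1/-1 encoding just described, the lane's driving direction can be compared directly with the vehicle's orientation (derived from the vehicle attribute model's front/rear output). This is a minimal sketch; the function and dictionary names are assumptions for illustration, not part of the disclosure:

```python
# Assumed encoding: +1 means toward the camera, -1 means away from it,
# matching the example where lane S1 is recorded as 1 and lane S2 as -1.
LANE_DIRECTION = {"S1": 1, "S2": -1}

def vehicle_orientation(attribute_output: str) -> int:
    """Map the attribute model's output to an orientation code.

    'front' means the front of the vehicle is closer to the camera,
    i.e. the vehicle faces the camera (+1); 'rear' means it faces away (-1).
    """
    return 1 if attribute_output == "front" else -1

def orientation_consistent(lane: str, attribute_output: str) -> bool:
    """True when the vehicle's orientation matches the lane's driving direction."""
    return LANE_DIRECTION[lane] == vehicle_orientation(attribute_output)
```

A vehicle in lane S1 whose rear is closer to the camera faces away from it, contradicting S1's toward-camera driving direction, so it would be flagged as inconsistent (reverse driving rather than reversing).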
  • the vehicle detection frame in the road monitoring image can be identified.
  • the vehicle detection frame can mark the location of the vehicle, and each vehicle detection frame uniquely corresponds to a vehicle.
  • a vehicle detection model can be pre-trained, and the vehicle detection model can be used to identify the vehicle detection frame in the road monitoring image.
  • the input of the vehicle detection model is the road monitoring image, and the output is the vehicle detection frame.
  • multiple vehicle detection frames S3 are identified in the road monitoring image shown in FIG. 2.
  • an initial vehicle detection model can be established, and the vehicle detection model can be trained using multiple sets of training data.
  • Each set of training data includes a road monitoring image and a vehicle detection frame in the road monitoring image. After training the vehicle detection model using multiple sets of training data, a vehicle detection model can be obtained.
  • Step 102: For each vehicle detection frame, determine the orientation of the vehicle according to the vehicle detection frame, and judge whether the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located;
  • a pre-trained vehicle attribute model can be used to determine the orientation of the vehicle, and a vehicle image can be cropped from a road monitoring image according to a vehicle detection frame.
  • the vehicle image is input into a pre-trained vehicle attribute model, and the front or rear of the vehicle can be output.
  • the result output by the vehicle attribute model is whichever of the front and the rear is closer to the camera position.
  • if the vehicle attribute model outputs the front of the vehicle, the front of the vehicle is closer to the camera position and the rear faces away from it, so the orientation of the vehicle can be determined; if the vehicle attribute model outputs the rear of the vehicle, the rear of the vehicle is closer to the camera position and the front faces away from it, and the orientation of the vehicle can likewise be determined.
  • an initial vehicle attribute model is established; multiple sets of training data are input into the initial vehicle attribute model for training to obtain the vehicle attribute model, each set of training data includes a vehicle image and one of the front and rear of the vehicle close to the camera position.
  • the output of the vehicle attribute model is the rear of the vehicle, it can be determined that the vehicle is facing away from the camera position, and the vehicle's direction can be determined based on this. Combined with the driving direction of the lane where the vehicle is located, it can be determined whether the driving direction of the lane and the vehicle's direction are consistent. If the output of the vehicle attribute model is the front of the vehicle, it can be determined that the vehicle is facing close to the camera position, and the vehicle's direction can be determined based on this. Combined with the driving direction of the lane where the vehicle is located, it can be determined whether the driving direction of the lane and the vehicle's direction are consistent.
  • if the vehicle is heading away from the camera position and the driving direction of the lane where the vehicle is located is away from the camera, the driving direction of the lane and the orientation of the vehicle are consistent; if the vehicle is heading toward the camera position and the driving direction of the lane is away from the camera, they are inconsistent; if the vehicle is heading away from the camera position and the driving direction of the lane is toward the camera, they are inconsistent; and if the vehicle is heading toward the camera position and the driving direction of the lane is toward the camera, they are consistent.
  • Step 103: When the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located, determine whether the vehicle is reversing according to the driving direction of the vehicle; when the orientation of the vehicle is inconsistent with the driving direction of the lane where the vehicle is located, determine that the vehicle is traveling in the reverse direction.
  • a warning list can be established, and the warning list includes the vehicle information of the reversing vehicle, and the warning list can be sent to the traffic control personnel in real time or periodically.
  • a corresponding tracking mark is assigned to each vehicle; for each tracking mark corresponding to the vehicle, a vehicle detection frame corresponding to the tracking mark in the latest N frames of road monitoring images is obtained, and a preset position point of the vehicle detection frame is used as a track point to obtain a plurality of track points; the plurality of track points are connected to obtain the trajectory of the vehicle; and the angle between the trajectory of the vehicle and the driving direction of the lane where the vehicle is located is obtained.
  • a target tracking algorithm such as the SORT algorithm can be used to track the vehicle detection frame.
  • a Kalman filter algorithm is used to predict the position of the vehicle.
  • the previous state value (such as the position of the vehicle detection frame in the previous frame image) and the current state measurement value (such as the position of the vehicle detection frame identified in the current frame image) are used to predict the estimated value of the next state (such as predicting the position of the vehicle detection frame in the next frame image), to achieve a pre-judgment of the vehicle position, and the prediction result is matched with the target detection result at the next moment (such as the actual position of the vehicle detection frame identified in the next frame image) by the Hungarian algorithm.
  • the vehicle detection frame predicted by tracking the previous frame image is associated with the vehicle detection frame detected in the next frame image.
  • the vehicle detection frame detected in the next frame image can be used to represent the successfully tracked vehicle detection frame, thereby completing the tracking of the vehicle and associating the vehicle detection frames in different frames.
  • the position of vehicle detection frame A in the kth frame image and the position of vehicle detection frame A in the k+1th frame image are used to predict the position of vehicle detection frame A in the k+2th frame image to obtain a prediction result; the position of vehicle detection frame B in the k+2th frame image is identified, and the prediction result is matched with the position of vehicle detection frame B using the Hungarian algorithm.
  • vehicle detection frame B can be associated with vehicle detection frame A, and it is considered that vehicle detection frame B and vehicle detection frame A are the vehicle detection frames corresponding to the same vehicle, thereby completing the tracking of the vehicle in different frame images.
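The prediction-and-matching loop described above can be sketched roughly as follows. This is a heavily simplified stand-in: a constant-velocity extrapolation replaces the Kalman filter, and a greedy best-IoU pass replaces the Hungarian algorithm; all names and the IoU threshold are assumptions. Boxes are (x1, y1, x2, y2) tuples:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def predict_next(box_k, box_k1):
    """Constant-velocity guess for frame k+2 from frames k and k+1."""
    return tuple(2 * b - a for a, b in zip(box_k, box_k1))

def associate(predictions, detections, min_iou=0.3):
    """Greedily match each predicted track box to its best detection."""
    matches, used = {}, set()
    for tid, pred in predictions.items():
        best_j, best_score = None, min_iou
        for j, det in enumerate(detections):
            score = iou(pred, det)
            if j not in used and score > best_score:
                best_j, best_score = j, score
        if best_j is not None:
            matches[tid] = best_j   # detection j continues track tid
            used.add(best_j)
    return matches
```

In the frame-A/frame-B example above, the position predicted from frames k and k+1 is compared with detection frame B in frame k+2; a sufficient overlap associates B with A as the same vehicle.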
  • in order to improve the accuracy of the vehicle motion state assessment, N may be greater than or equal to 20, for example 20, 25, 30, 35, or 40.
  • a corresponding tracking mark is assigned to each vehicle.
  • the corresponding relationship between the tracking mark and the vehicle detection frame identified in the road monitoring image can be realized through the target tracking algorithm.
  • a tracking mark D is assigned to vehicle C.
  • at the first moment, 25 vehicle detection frames corresponding to the tracking mark D are identified from the 1st frame to the 25th frame (the latest 25 frames), the preset position point of each vehicle detection frame is determined, and each preset position point is used as a trajectory point to obtain 25 trajectory points; these 25 trajectory points are connected to obtain the vehicle trajectory at the first moment. If the angle between this trajectory and the driving direction of the lane is greater than or equal to the preset first threshold, the vehicle is in a reversing state to be determined, and the value of the preset reversing judgment counter is increased by 1.
  • at the second moment, 25 vehicle detection frames corresponding to the tracking mark D are identified from the 2nd frame to the 26th frame (the latest 25 frames), and the vehicle trajectory at the second moment is obtained in the same way; if the vehicle is again in a reversing state to be determined, the value of the preset reversing judgment counter is increased by 1.
  • at the third moment, the vehicle trajectory is obtained in the same way from the 3rd frame to the 27th frame, and the counter is increased by 1 if the condition holds; and so on. When the value of the preset reversing judgment counter is greater than or equal to M, and the length of the trajectory of the vehicle is greater than a preset second threshold, it is judged that the vehicle is reversing.
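The counter-based decision walked through above might look like this in outline. The threshold values, the choice to reset the counter whenever a moment fails the angle test (one reading of "M consecutive times"), and all names are assumptions for illustration:

```python
FIRST_THRESHOLD_DEG = 120   # angle threshold for the pending reversing state
SECOND_THRESHOLD_PX = 100   # minimum trajectory length, in pixels
M = 3                       # required consecutive confirmations

def update_reversing_counter(counter: int, angle_deg: float) -> int:
    """Increment while the vehicle stays in the pending reversing state;
    reset as soon as one moment fails the angle test."""
    return counter + 1 if angle_deg >= FIRST_THRESHOLD_DEG else 0

def is_reversing(counter: int, track_length_px: float) -> bool:
    """Reversing is confirmed only after M consecutive pending states
    and a trajectory long enough to rule out a stationary vehicle."""
    return counter >= M and track_length_px > SECOND_THRESHOLD_PX
```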
  • the preset position point may be the center point of the vehicle detection frame.
  • the preset position point is not limited to the center point of the vehicle detection frame, but may also be other position points of the vehicle detection frame, such as the vertex of the vehicle detection frame.
  • the determination of the length of the vehicle's trajectory is intended to avoid misjudging the situation in which the vehicle remains stationary as a vehicle reversing.
  • when the vehicle is reversing straight back, the angle between the trajectory of the vehicle and the driving direction of the lane where the vehicle is located is 180°. However, the vehicle need not be determined to be in a reversing state to be determined only when this angle is exactly 180°; the vehicle can be determined to be in a reversing state to be determined whenever the angle is greater than or equal to a preset first threshold.
  • the first threshold can be greater than or equal to 85°, such as 120°, 121°, 122°, 123°, 124°, 125°, etc.
  • the second threshold can be expressed in image pixels, for example greater than or equal to 100 pixels, such as 110, 120, 130, 140, 150, 160, 170, or 180 pixels.
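The trajectory construction and the angle and length measurements used by these thresholds can be sketched as below, assuming detection frames are (x1, y1, x2, y2) boxes, the preset position point is the center point, and the lane's driving direction is given as a 2-D image vector; these representations are illustrative, not from the patent:

```python
import math

def center(box):
    """Center point of a detection frame, the preset position point here."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def trajectory(boxes):
    """Trajectory points from the latest N detection frames of one mark."""
    return [center(b) for b in boxes]

def track_length(points):
    """Total polyline length in pixels (compared with the second threshold)."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def angle_to_lane(points, lane_vec):
    """Angle in degrees between the overall motion (first point to last)
    and the lane direction vector (compared with the first threshold)."""
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    dot = dx * lane_vec[0] + dy * lane_vec[1]
    norm = math.hypot(dx, dy) * math.hypot(*lane_vec)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

A vehicle moving straight down the image while its lane points straight up yields an angle of 180°, the ideal reversing case described above.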
  • an alarm message about the vehicle can be sent to the traffic control personnel.
  • a warning list can be established, which includes the vehicle information of the reversing vehicle, and the warning list can be sent to the traffic control personnel in real time or periodically.
  • tracking marks may disappear due to factors such as occlusion; after the occlusion is gone, a new tracking mark appears. Such jumps in tracking marks cause discontinuity in the vehicle trajectory points, making it impossible to accurately determine the direction of vehicle movement, so the vehicle trajectory points need to be reconnected when vehicle tracking fails or the tracking mark jumps.
  • the vehicle detection frame in the current frame road monitoring image can be identified and matched with the vehicle detection frame corresponding to an existing tracking mark. If the matching degree between the two is greater than the preset third threshold (that is, the newly added tracking mark can be matched with a tracking mark that disappeared at the same time), the newly added tracking mark is replaced with the existing tracking mark, and the corresponding relationship between the vehicle detection frame in the current frame road monitoring image and the existing tracking mark is established.
  • the Hungarian algorithm can be used to match the vehicle detection frame with the vehicle detection frame corresponding to the existing tracking mark. This can solve the problem of the disappearance of the vehicle track when the tracking mark jumps, which helps to accurately determine the direction of vehicle movement.
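The re-association step might be outlined as follows, with `frame_match` standing in for whatever matching degree is used (IoU, for instance, possibly computed via the Hungarian algorithm over all pairs); the threshold value and all names are assumptions:

```python
THIRD_THRESHOLD = 0.5  # illustrative matching-degree threshold

def relink(new_marks, lost_marks, frame_match):
    """Map each newly added tracking mark to a disappeared one when the
    matching degree of their detection frames exceeds the third threshold,
    so the old mark (and its trajectory) is kept instead of the new one."""
    mapping = {}
    for new_id, new_box in new_marks.items():
        best_id, best_score = None, THIRD_THRESHOLD
        for old_id, old_box in lost_marks.items():
            score = frame_match(new_box, old_box)
            if score > best_score and old_id not in mapping.values():
                best_id, best_score = old_id, score
        if best_id is not None:
            mapping[new_id] = best_id  # new mark replaced by existing mark
    return mapping
```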
  • This embodiment can also determine whether the vehicle is in a parking state. Specifically, when the length of the trajectory of the vehicle is less than or equal to a preset second threshold, the vehicle is determined to be in a parking state to be determined; when the vehicle is determined to be in a parking state to be determined for at least K consecutive times, the vehicle is determined to be parked; wherein K is an integer greater than 1.
  • a tracking mark D is assigned to vehicle C.
  • at the first moment, 25 vehicle detection frames corresponding to the tracking mark D are identified from the 1st frame to the 25th frame image (the latest 25 frames of images), the preset position point of each vehicle detection frame is used as a track point to obtain 25 track points, and these 25 track points are connected to obtain the vehicle trajectory at the first moment. If the length of this trajectory is less than or equal to the preset second threshold, the vehicle is in a parking state to be determined, and the value of the preset parking judgment counter is increased by 1.
  • at the second moment, the vehicle trajectory is obtained in the same way from the 2nd frame to the 26th frame image, and the counter is increased by 1 if the condition holds; at the third moment, the vehicle trajectory is obtained from the 3rd frame to the 27th frame image, and so on.
  • when the value of the preset parking judgment counter is greater than or equal to K, it is judged that the vehicle is parked. After it is judged that the vehicle is parked, the starting parking time of the vehicle can be updated to the current moment.
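The parking decision, including the update of the starting parking time, can be sketched in the same style as the reversing counter; the threshold, the value of K, the reset-on-failure behavior, and all names are illustrative assumptions:

```python
SECOND_THRESHOLD_PX = 100  # trajectory length at or below this means "still"
K = 3                      # required consecutive pending-parking states

def update_parking(counter, track_length_px, now, start_time):
    """One per-moment update: a short trajectory increments the parking
    counter, K consecutive hits confirm parking, and the starting parking
    time is then refreshed to the current moment."""
    if track_length_px <= SECOND_THRESHOLD_PX:
        counter += 1
    else:
        counter = 0
    parked = counter >= K
    if parked:
        start_time = now
    return counter, parked, start_time
```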
  • the orientation of the vehicle is determined based on the vehicle detection frame. First, it is determined whether the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located. When the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located, it is determined whether the vehicle is reversing based on the driving direction of the vehicle. This can avoid confusing the two behaviors of reversing and reverse driving, and can improve the recognition accuracy of reversing behavior.
  • the embodiment of the present disclosure further provides a reversing detection device, as shown in FIG. 3, comprising:
  • the recognition module 21 is used to obtain a road monitoring image and recognize a vehicle detection frame in the road monitoring image;
  • the road conditions may be photographed by roadside cameras to obtain a road monitoring image as shown in FIG. 2 .
  • the road monitoring image includes but is not limited to monitoring images of expressways, monitoring images of roads within a city, and the like.
  • the lanes in the road monitoring image can be calibrated using a polygonal frame. As shown in FIG2 , lanes S1 and S2 in the road monitoring image are calibrated. After calibrating lanes S1 and S2 in the road monitoring image, the driving directions of lanes S1 and S2 can be determined. In this example, the driving directions of lane S1 and lane S2 are opposite, and the driving direction of lane S1 can be recorded as 1, the driving direction of lane S2 is recorded as -1, where 1 represents forward direction and -1 represents reverse direction.
  • the vehicle detection frame in the road monitoring image can be identified.
  • the vehicle detection frame can mark the location of the vehicle, and each vehicle detection frame uniquely corresponds to a vehicle.
  • a vehicle detection model can be pre-trained, and the vehicle detection model can be used to identify the vehicle detection frame in the road monitoring image.
  • the input of the vehicle detection model is the road monitoring image, and the output is the vehicle detection frame.
  • multiple vehicle detection frames S3 are identified in the road monitoring image shown in FIG. 2.
  • an initial vehicle detection model can be established, and the vehicle detection model can be trained using multiple sets of training data.
  • Each set of training data includes a road monitoring image and a vehicle detection frame in the road monitoring image. After training the vehicle detection model using multiple sets of training data, a vehicle detection model can be obtained.
  • a first processing module 22 is used to determine the orientation of the vehicle according to each vehicle detection frame, and judge whether the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located;
  • a pre-trained vehicle attribute model can be used to determine the orientation of the vehicle, and a vehicle image can be cropped from a road monitoring image according to a vehicle detection frame.
  • the vehicle image is input into a pre-trained vehicle attribute model, and the front or rear of the vehicle can be output.
  • the result output by the vehicle attribute model is the one of the front and rear that is closer to the camera position.
  • the vehicle attribute model outputs the front of the vehicle, it means that the front of the vehicle is closer to the camera position and the rear of the vehicle is away from the camera position, then the direction of the vehicle can be determined; if the vehicle attribute model outputs the rear of the vehicle, it means that the rear of the vehicle is closer to the camera position and the front of the vehicle is away from the camera position, then the direction of the vehicle can be determined.
  • an initial vehicle attribute model is established; multiple sets of training data are input into the initial vehicle attribute model for training to obtain the vehicle attribute model, each set of training data includes a vehicle image and one of the front and rear of the vehicle close to the camera position.
  • the output of the vehicle attribute model is the rear end of the vehicle, then it can be determined that the vehicle is facing away from the camera.
  • the output of the vehicle attribute model is the front of the vehicle, so it can be determined that the orientation of the vehicle is close to the camera position.
  • the orientation of the vehicle can be determined based on this. Combined with the driving direction of the lane where the vehicle is located, it can be determined whether the driving direction of the lane is consistent with the orientation of the vehicle.
  • the second processing module 23 is used to determine whether the vehicle is reversing according to the driving direction of the vehicle when the direction of the vehicle is consistent with the driving direction of the lane where the vehicle is located, and to determine that the vehicle is traveling in the reverse direction when the direction of the vehicle is inconsistent with the driving direction of the lane where the vehicle is located.
  • a warning list can be established, and the warning list includes the vehicle information of the reversing vehicle, and the warning list can be sent to the traffic control personnel in real time or periodically.
  • a corresponding tracking mark is assigned to each vehicle; for each tracking mark corresponding to the vehicle, a vehicle detection frame corresponding to the tracking mark in the latest N frames of road monitoring images is obtained, and a preset position point of the vehicle detection frame is used as a trajectory point to obtain a plurality of trajectory points; the plurality of trajectory points are connected to obtain the trajectory of the vehicle; when the angle between the trajectory of the vehicle and the driving direction of the lane where the vehicle is located is greater than or equal to a preset first threshold, it is determined that the vehicle is in a reversing state to be determined; when the vehicle is determined to be in a reversing state to be determined for at least M consecutive times and the length of the trajectory of the vehicle is greater than a preset second threshold, it is determined that the vehicle is reversing; wherein M and N are integers greater than 1.
  • a target tracking algorithm such as the SORT algorithm can be used to track the vehicle detection frames.
  • a Kalman filter algorithm is used to predict the position of the vehicle.
  • the previous state value (such as the position of the vehicle detection frame in the previous frame image) and the current state measurement value (such as the position of the vehicle detection frame identified in the current frame image) are used to predict the estimated value of the next state (such as predicting the position of the vehicle detection frame in the next frame image), thereby realizing the prediction of the vehicle position, and the prediction result is combined with the target detection result at the next moment.
  • the result (such as the actual position of the vehicle detection frame identified in the next frame image) is matched using the Hungarian algorithm.
  • the vehicle detection frame predicted by tracking the previous frame image is associated with the vehicle detection frame detected in the next frame image.
  • the vehicle detection frame detected in the next frame image can be used to represent the successfully tracked vehicle detection frame, completing the tracking of the vehicle and associating the vehicle detection frames in different frames.
  • the position of vehicle detection frame A in the kth frame image and the position of vehicle detection frame A in the k+1th frame image are used to predict the position of vehicle detection frame A in the k+2th frame image to obtain a prediction result; the position of vehicle detection frame B in the k+2th frame image is identified, and the prediction result is matched with the position of vehicle detection frame B by the Hungarian algorithm.
  • vehicle detection frame B can be associated with vehicle detection frame A, and it is considered that vehicle detection frame B and vehicle detection frame A are vehicle detection frames corresponding to the same vehicle, thereby completing the tracking of the vehicle in different frame images.
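The association step described above can be illustrated with a simple IoU-based pairing. The sketch below is an assumption-laden stand-in: the greedy pass over IoU scores replaces the Hungarian assignment the text names, and the 0.3 matching threshold is illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def associate(predicted, detected, threshold=0.3):
    """Greedily pair predicted boxes with detected boxes by best IoU.

    `predicted` maps a tracking mark to the box the Kalman filter expects
    in the next frame; `detected` lists the boxes actually identified in
    that frame. Returns {tracking_mark: index_into_detected}. The greedy
    pass is a readable stand-in for the Hungarian assignment.
    """
    pairs = sorted(
        ((iou(p, d), tid, j)
         for tid, p in predicted.items()
         for j, d in enumerate(detected)),
        reverse=True,
    )
    matches, used_t, used_d = {}, set(), set()
    for score, tid, j in pairs:
        if score < threshold or tid in used_t or j in used_d:
            continue
        matches[tid] = j  # detected box j inherits this tracking mark
        used_t.add(tid)
        used_d.add(j)
    return matches
```

In the frame-A/frame-B example above, a detection overlapping the prediction well enough would be associated with the same tracking mark, while a far-away box would not.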
  • in order to improve the accuracy of vehicle motion state assessment, N may be greater than or equal to 20, for example 20, 25, 30, 35, or 40.
  • a corresponding tracking mark is assigned to each vehicle.
  • the corresponding relationship between the tracking mark and the vehicle detection frame identified in the road monitoring image can be realized through the target tracking algorithm.
  • a tracking mark D is assigned to vehicle C.
  • 25 vehicle detection frames corresponding to the tracking mark D are respectively identified from the 1st frame to the 25th frame (the latest 25 frames), and the preset position point of each vehicle detection frame is determined respectively.
  • the preset position point of each vehicle detection frame is used as a trajectory point to obtain 25 trajectory points. These 25 trajectory points are connected to obtain the vehicle trajectory at the first moment.
  • the value of the preset reversing judgment counter is increased by 1; at the second moment, from the 2nd frame to the 26th frame (the latest 25 frames), the preset position point of each vehicle detection frame is determined ...
  • 25 vehicle detection frames corresponding to the tracking mark D are respectively identified, and the preset position point of each vehicle detection frame is determined respectively.
  • the preset position point of each vehicle detection frame is used as a trajectory point to obtain 25 trajectory points.
  • the 25 trajectory points are connected to obtain the vehicle trajectory at the second moment.
  • the value of the preset reversing judgment counter is increased by 1; at the third moment, 25 vehicle detection frames corresponding to the tracking mark D are respectively identified from the 3rd frame to the 27th frame image (the latest 25 frames of images), and the preset position point of each vehicle detection frame is determined respectively.
  • the determination of the length of the vehicle's trajectory is intended to avoid misjudging the situation in which the vehicle remains stationary as a vehicle reversing.
  • when the vehicle reverses along its lane (trajectory parallel to the lane), the angle between the trajectory of the vehicle and the driving direction of the lane in which the vehicle is located is 180°.
  • the vehicle need not be determined to be in a reversing state to be determined only when that angle is exactly 180°; it can be determined to be in a reversing state to be determined whenever the angle between the trajectory of the vehicle and the driving direction of the lane is greater than or equal to a preset first threshold.
  • the first threshold can be greater than or equal to 85°, such as 120°, 121°, 122°, 123°, 124°, 125°, etc.
  • the second threshold can be represented by pixels of the image, for example, greater than or equal to 100 pixels, such as 110 pixels, 120 pixels, 130 pixels, 140 pixels, 150 pixels, 160 pixels, 170 pixels, 180 pixels, etc.
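The two quantities these thresholds apply to can be computed as follows. Reducing the trajectory's direction to the vector from its first to its last point, and representing the lane direction as a (dx, dy) vector, are simplifying assumptions for illustration:

```python
import math


def trajectory_angle_and_length(points, lane_dir):
    """Angle (degrees) between a trajectory and the lane direction,
    plus the trajectory's length in pixels.

    `points` are the connected trajectory points (x, y) from the latest
    N frames; `lane_dir` is a (dx, dy) vector along the lane's driving
    direction. The trajectory direction is taken from the first to the
    last point; the length sums the segments between consecutive points.
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    tx, ty = x1 - x0, y1 - y0
    length = sum(
        math.hypot(bx - ax, by - ay)
        for (ax, ay), (bx, by) in zip(points, points[1:])
    )
    norm = math.hypot(tx, ty) * math.hypot(*lane_dir)
    if norm == 0:
        return 0.0, length  # degenerate trajectory: no direction
    cos_a = (tx * lane_dir[0] + ty * lane_dir[1]) / norm
    cos_a = max(-1.0, min(1.0, cos_a))  # guard acos against rounding
    return math.degrees(math.acos(cos_a)), length
```

A trajectory moving exactly against the lane gives 180°, matching the parallel-reversing case described above; the length in pixels is what the second threshold is compared against.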
  • an alarm message about the vehicle can be sent to the traffic control personnel.
  • a warning list can be established, which includes the vehicle information of the reversing vehicle, and the warning list can be sent to the traffic control personnel in real time or periodically.
  • tracking marks may disappear due to factors such as obstructions. After the obstructions disappear, new tracking marks will appear. The jump of tracking marks will cause discontinuity of vehicle trajectory points, making it impossible to accurately determine the direction of vehicle movement. It is necessary to reconnect the vehicle trajectory points when vehicle tracking fails or the tracking mark jumps.
  • the newly added tracking mark needs to be matched with the tracking mark that disappears at the same time. Specifically, identify the vehicle detection frame in the current frame of the road monitoring image and match it with the vehicle detection frame corresponding to an existing tracking mark. If the matching degree between the two is greater than the preset third threshold, that is, the newly added tracking mark can be matched to the mark that disappeared at the same time, replace the newly added tracking mark with the existing tracking mark and establish the correspondence between the vehicle detection frame in the current frame and the existing tracking mark. This solves the problem of the vehicle track disappearing when the tracking mark jumps, which helps to accurately determine the direction of vehicle movement.
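The re-association step can be sketched as below. Matching the first box of a newly added tracking mark against the last known box of a mark that disappeared at the same time, using IoU as the "matching degree" and 0.5 as a stand-in for the preset third threshold, are all illustrative assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def reconnect_tracks(new_tracks, lost_tracks, threshold=0.5):
    """Match newly appeared tracking marks to marks that vanished at the
    same time, so a track broken by occlusion keeps its old identity.

    `new_tracks` maps a new mark to its first detected box; `lost_tracks`
    maps a disappeared mark to its last known box. Returns
    {new_mark: old_mark} for pairs whose IoU exceeds `threshold`; the
    caller then replaces the new mark with the old one.
    """
    mapping = {}
    for new_id, new_box in new_tracks.items():
        best_id, best_score = None, threshold
        for old_id, old_box in lost_tracks.items():
            score = iou(new_box, old_box)
            if score > best_score:
                best_id, best_score = old_id, score
        if best_id is not None:
            mapping[new_id] = best_id
    return mapping
```

Reusing the old mark keeps the trajectory points contiguous across the occlusion, which is what makes the direction judgment reliable again.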
  • This embodiment can also determine whether the vehicle is in a parking state. Specifically, when the length of the trajectory of the vehicle is less than or equal to a preset second threshold, the vehicle is determined to be in a parking state to be determined; when the vehicle is determined to be in a parking state to be determined for at least K consecutive times, the vehicle is determined to be parked; wherein K is an integer greater than 1.
  • a tracking mark D is assigned to vehicle C.
  • 25 vehicle detection frames corresponding to the tracking mark D are respectively identified from the 1st frame to the 25th frame image (the latest 25 frame image), and the preset position point of each vehicle detection frame is determined respectively.
  • the preset position point of each vehicle detection frame is used as a trajectory point to obtain 25 trajectory points. These 25 trajectory points are connected to obtain the vehicle trajectory at the first moment.
  • the value of the preset parking judgment counter is increased by 1.
  • 25 vehicle detection frames corresponding to the tracking mark D are respectively identified from the 2nd frame to the 26th frame image (the latest 25 frame image), and the preset position point of each vehicle detection frame is determined respectively.
  • the preset position point of each vehicle detection frame is used as a trajectory point to obtain 25 trajectory points. These 25 trajectory points are connected to obtain the vehicle trajectory at the second moment.
  • the value of the preset parking judgment counter is increased by 1;
  • 25 vehicle detection frames corresponding to the tracking mark D are respectively identified from the 3rd frame to the 27th frame image (the latest 25 frames of images), and the preset position points of each vehicle detection frame are respectively determined.
  • the preset position points of each vehicle detection frame are used as a trajectory point to obtain 25 trajectory points. These 25 trajectory points are connected to obtain the vehicle trajectory at the third moment.
  • the value of the preset parking judgment counter is increased by 1; and so on; when the value of the preset parking judgment counter is greater than or equal to K, it is judged that the vehicle is parked. After it is judged that the vehicle is parked, the starting parking time of the vehicle can be updated to the current time.
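By analogy with the reversing counter, the parking judgment above might be sketched as follows. K, the length threshold, resetting the counter when movement is seen (so the K observations are consecutive), and stamping the parking start time only when parking is first confirmed are interpretive choices:

```python
class ParkingJudge:
    """Counterpart of the reversing check for the parking state.

    When the trajectory built from the latest N frames is shorter than
    the second threshold, the vehicle may be standing still; K
    consecutive such observations confirm a parked vehicle, and the
    starting parking time is then recorded.
    """

    def __init__(self, k=3, length_threshold=100.0):
        self.k = k
        self.length_threshold = length_threshold  # preset second threshold
        self.counter = 0                          # parking judgment counter
        self.parking_start = None

    def update(self, traj_length_px, now):
        if traj_length_px <= self.length_threshold:
            self.counter += 1
        else:
            self.counter = 0  # movement breaks the consecutive run
            return False
        if self.counter >= self.k:
            if self.parking_start is None:
                self.parking_start = now  # stamp when parking is confirmed
            return True
        return False
```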
  • the orientation of the vehicle is determined based on the vehicle detection frame. First, it is determined whether the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located. When the orientation of the vehicle is consistent with the driving direction of the lane where the vehicle is located, it is determined whether the vehicle is reversing based on the driving direction of the vehicle. This can avoid confusing the two behaviors of reversing and reverse driving, and can improve the recognition accuracy of reversing behavior.
  • the first processing module 22 includes:
  • a determination unit used to determine the lane where the center point of the vehicle detection frame is located
  • the acquisition unit is used to acquire the driving direction of the lane and compare the driving direction of the lane with the orientation of the vehicle.
  • the first processing module 22 further includes:
  • a direction processing unit is used to input the vehicle detection frame into a pre-trained vehicle attribute model, and the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
  • the device further comprises:
  • the training module is used to establish an initial vehicle attribute model; multiple sets of training data are input into the initial vehicle attribute model for training to obtain the vehicle attribute model, each set of training data includes a vehicle image and one of the front and rear of the vehicle close to the camera position.
  • the second processing module 23 includes:
  • an allocation unit for assigning a corresponding tracking mark to each vehicle
  • a trajectory acquisition unit is used to acquire, for each tracking mark corresponding to the vehicle, a vehicle detection frame corresponding to the tracking mark in the latest N frames of road monitoring images, and use a preset position point of the vehicle detection frame as a trajectory point to obtain a plurality of trajectory points; and connect the plurality of trajectory points to obtain a trajectory of the vehicle;
  • a judgment unit configured to determine that the vehicle is in a reversing state to be determined when an angle between a trajectory of the vehicle and a driving direction of a lane in which the vehicle is located is greater than or equal to a preset first threshold
  • a reversing judgment unit configured to determine that the vehicle is reversing when the vehicle is determined to be in a reversing state to be determined for at least M consecutive times and when the length of the trajectory of the vehicle is greater than a preset second threshold;
  • M and N are integers greater than 1.
  • the second processing module further includes:
  • a parking determination unit configured to determine that the vehicle is in a parking state to be determined when the length of the trajectory of the vehicle is less than or equal to a preset second threshold value; and determine that the vehicle is parked when the vehicle is in a parking state to be determined for at least K consecutive times;
  • K is an integer greater than 1.
  • the device further comprises:
  • the matching module is used to identify a vehicle detection frame in a current frame road monitoring image; match the vehicle detection frame with a vehicle detection frame corresponding to an existing tracking mark, and if the matching degree between the vehicle detection frame and the vehicle detection frame corresponding to the existing tracking mark is greater than a preset third threshold, establish a corresponding relationship between the vehicle detection frame in the current frame road monitoring image and the existing tracking mark.
  • An embodiment of the present disclosure further provides a readable storage medium, on which a program or instruction is stored; when the program or instruction is executed by a processor, the steps of the reversing detection method described above are implemented.
  • the processor is the processor in the terminal described in the above embodiment.
  • the readable storage medium includes a computer readable storage medium, such as a computer read-only memory ROM, a random access memory RAM, a magnetic disk or an optical disk.
  • An embodiment of the present application further provides a chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the various processes of the above-mentioned reversing detection method embodiment, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the chip mentioned in the embodiments of the present application can also be called a system-level chip, a system chip, a chip system or a system-on-chip chip, etc.
  • the embodiment of the present application further provides a computer program/program product, which is stored in a storage medium.
  • the computer program/program product is executed by at least one processor to implement the various processes of the above-mentioned reversing detection method embodiment, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), and includes a number of instructions for enabling a terminal (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in each embodiment of the present application.

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a reversing detection method and device, belonging to the field of image processing technology. The reversing detection method includes: acquiring a road monitoring image and identifying vehicle detection frames in the road monitoring image; for each vehicle detection frame, determining the orientation of the vehicle according to the vehicle detection frame, and judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located; when the orientation of the vehicle is consistent with the driving direction of the lane, judging whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane, determining that the vehicle is driving in the wrong direction. The present disclosure can improve the recognition accuracy of reversing behavior.

Description

Reversing detection method and device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202211415757.8, filed in China on November 11, 2022, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of image processing technology, and in particular to a reversing detection method and device.
BACKGROUND
With the rapid growth in the number of motor vehicles, and because some drivers have a weak awareness of standardized driving, illegal reversing occurs from time to time on expressways. This behavior greatly reduces road capacity and easily causes traffic accidents, resulting in loss of life and property.
At present, reversing behavior can be identified through on-site supervision by police and auxiliary personnel and through road surveillance video, but this approach consumes a large amount of human resources. With the development of computer vision, computer vision technology has developed rapidly in the field of traffic monitoring; identifying and detecting reversing behavior through computer vision can save police resources, but the accuracy of existing computer vision techniques in identifying reversing behavior is low.
SUMMARY
Embodiments of the present disclosure provide a reversing detection method and device, which can improve the recognition accuracy of reversing behavior.
The embodiments of the present disclosure provide the following technical solutions:
In one aspect, a reversing detection method is provided, including:
acquiring a road monitoring image, and identifying vehicle detection frames in the road monitoring image;
for each vehicle detection frame, determining the orientation of the vehicle according to the vehicle detection frame, and judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located;
when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, judging whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane in which the vehicle is located, determining that the vehicle is driving in the wrong direction.
In some embodiments, judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located includes:
determining the lane in which the center point of the vehicle detection frame is located;
acquiring the driving direction of the lane, and comparing the driving direction of the lane with the orientation of the vehicle.
In some embodiments, determining the orientation of the vehicle according to the vehicle detection frame includes:
inputting the vehicle detection frame into a pre-trained vehicle attribute model, where the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
In some embodiments, the method further includes a step of training the vehicle attribute model, which includes:
establishing an initial vehicle attribute model;
inputting multiple sets of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, where each set of training data includes a vehicle image and a label indicating whichever of the front and the rear of the vehicle is closer to the camera position.
In some embodiments, judging whether the vehicle is reversing according to the driving direction of the vehicle includes:
assigning a corresponding tracking mark to each vehicle;
for the tracking mark corresponding to each vehicle, acquiring the vehicle detection frames corresponding to the tracking mark in the latest N frames of road monitoring images, and taking a preset position point of each vehicle detection frame as a trajectory point to obtain a plurality of trajectory points;
connecting the plurality of trajectory points to obtain the trajectory of the vehicle;
when the angle between the trajectory of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold, determining that the vehicle is in a reversing state to be determined;
when the vehicle is determined to be in the reversing state to be determined at least M consecutive times and the length of the trajectory of the vehicle is greater than a preset second threshold, determining that the vehicle is reversing;
where M and N are integers greater than 1.
In some embodiments, the method further includes:
when the length of the trajectory of the vehicle is less than or equal to the preset second threshold, determining that the vehicle is in a parking state to be determined;
when the vehicle is determined to be in the parking state to be determined at least K consecutive times, determining that the vehicle is parked;
where K is an integer greater than 1.
In some embodiments, N is greater than or equal to 20.
In some embodiments, acquiring the vehicle detection frame corresponding to the tracking mark in each frame of image includes:
identifying the vehicle detection frame in the current frame of the road monitoring image;
matching the vehicle detection frame with the vehicle detection frame corresponding to an existing tracking mark, and, if the matching degree between them is greater than a preset third threshold, establishing a correspondence between the vehicle detection frame in the current frame of the road monitoring image and the existing tracking mark.
An embodiment of the present disclosure further provides a reversing detection device, including:
an identification module, configured to acquire a road monitoring image and identify vehicle detection frames in the road monitoring image;
a first processing module, configured to, for each vehicle detection frame, determine the orientation of the vehicle according to the vehicle detection frame, and judge whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located;
a second processing module, configured to, when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, judge whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane in which the vehicle is located, determine that the vehicle is driving in the wrong direction.
In some embodiments, the first processing module includes:
a determination unit, configured to determine the lane in which the center point of the vehicle detection frame is located;
an acquisition unit, configured to acquire the driving direction of the lane and compare the driving direction of the lane with the orientation of the vehicle.
In some embodiments, the first processing module further includes:
an orientation processing unit, configured to input the vehicle detection frame into a pre-trained vehicle attribute model, where the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
In some embodiments, the device further includes:
a training module, configured to establish an initial vehicle attribute model and input multiple sets of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, where each set of training data includes a vehicle image and a label indicating whichever of the front and the rear of the vehicle is closer to the camera position.
In some embodiments, the second processing module includes:
an allocation unit, configured to assign a corresponding tracking mark to each vehicle;
a trajectory acquisition unit, configured to, for the tracking mark corresponding to each vehicle, acquire the vehicle detection frames corresponding to the tracking mark in the latest N frames of road monitoring images, take a preset position point of each vehicle detection frame as a trajectory point to obtain a plurality of trajectory points, and connect the trajectory points to obtain the trajectory of the vehicle;
a judgment unit, configured to determine that the vehicle is in a reversing state to be determined when the angle between the trajectory of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold;
a reversing judgment unit, configured to determine that the vehicle is reversing when the vehicle is determined to be in the reversing state to be determined at least M consecutive times and the length of the trajectory of the vehicle is greater than a preset second threshold;
where M and N are integers greater than 1.
In some embodiments, the second processing module further includes:
a parking judgment unit, configured to determine that the vehicle is in a parking state to be determined when the length of the trajectory of the vehicle is less than or equal to the preset second threshold, and to determine that the vehicle is parked when the vehicle is determined to be in the parking state to be determined at least K consecutive times;
where K is an integer greater than 1.
In some embodiments, the device further includes:
a matching module, configured to identify the vehicle detection frame in the current frame of the road monitoring image, match it with the vehicle detection frame corresponding to an existing tracking mark, and, if the matching degree between them is greater than a preset third threshold, establish a correspondence between the vehicle detection frame in the current frame and the existing tracking mark.
An embodiment of the present disclosure further provides a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, the steps of the reversing detection method described above are implemented.
The embodiments of the present disclosure have the following beneficial effects:
In the above solutions, the orientation of the vehicle is determined according to the vehicle detection frame. It is first judged whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, and only when they are consistent is it judged, according to the driving direction of the vehicle, whether the vehicle is reversing. This avoids confusing reversing with driving in the wrong direction, and can improve the recognition accuracy of reversing behavior.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of a reversing detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of identifying vehicle detection frames in a road monitoring image according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a reversing detection device according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
To make the technical problems to be solved, the technical solutions, and the advantages of the embodiments of the present disclosure clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
Embodiments of the present disclosure provide a reversing detection method and device, which can improve the recognition accuracy of reversing behavior.
An embodiment of the present disclosure provides a reversing detection method, as shown in FIG. 1, including:
Step 101: acquiring a road monitoring image, and identifying vehicle detection frames in the road monitoring image;
Specifically, the road can be photographed by a roadside camera to obtain a road monitoring image as shown in FIG. 2. Road monitoring images include, but are not limited to, surveillance images of expressways, surveillance images of urban roads, and the like.
After the road monitoring image is acquired, polygonal frames can be used to calibrate the lanes in the image. As shown in FIG. 2, lanes S1 and S2 are calibrated in the road monitoring image, after which their driving directions can be determined. Specifically, the driving direction of a lane can be determined according to its position: for example, in FIG. 2, lane S2 is on the right of lane S1; since vehicles drive on the right, it can be determined that the driving direction of lane S2 is away from the camera position, and since lane S1 is on the left of lane S2, the driving direction of lane S1 is toward the camera position. Alternatively, in some road monitoring images the lanes are marked with arrows indicating the driving direction, and the driving direction can be determined directly from the arrows.
In this example, the driving directions of lanes S1 and S2 are opposite. The driving direction of lane S1 can be recorded as 1 and that of lane S2 as -1, where 1 indicates the forward direction and -1 the reverse direction.
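Assigning a detection frame's center point to one of the calibrated lane polygons can be sketched with a standard ray-casting test. The polygon coordinates, the lane names, and the +1/-1 direction encoding in this sketch are illustrative:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is point (x, y) inside the calibrated lane polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray's level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def lane_of(center, lanes):
    """Return (lane_id, direction) for the lane polygon containing `center`.

    `lanes` maps lane_id -> (polygon, direction), where direction uses the
    1 / -1 encoding described above. Returns None if no lane contains it.
    """
    for lane_id, (polygon, direction) in lanes.items():
        if point_in_polygon(center, polygon):
            return lane_id, direction
    return None
```

With lanes calibrated as polygons, the center point of each vehicle detection frame is tested against each polygon in turn; the matching lane supplies the driving direction used in the consistency check.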
After the road monitoring image is acquired, the vehicle detection frames in it can be identified. A vehicle detection frame marks the position of a vehicle, and each vehicle detection frame uniquely corresponds to one vehicle. Specifically, a vehicle detection model can be trained in advance and used to identify the vehicle detection frames in the road monitoring image; the input of the vehicle detection model is a road monitoring image and the output is vehicle detection frames. In a specific example, multiple vehicle detection frames S3 are identified in the road monitoring image shown in FIG. 2.
When training the vehicle detection model, an initial vehicle detection model can be established and trained with multiple sets of training data, where each set of training data includes a road monitoring image and the vehicle detection frames in it; after training with the multiple sets of training data, the vehicle detection model is obtained.
Step 102: for each vehicle detection frame, determining the orientation of the vehicle according to the vehicle detection frame, and judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located;
Specifically, a pre-trained vehicle attribute model can be used to determine the orientation of the vehicle. A vehicle image is cropped from the road monitoring image according to the vehicle detection frame and input into the pre-trained vehicle attribute model, which outputs the front or the rear of the vehicle; the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
If the vehicle attribute model outputs the front of the vehicle, the front of the vehicle is closer to the camera position and the rear is away from it, so the orientation of the vehicle can be determined; if the model outputs the rear of the vehicle, the rear is closer to the camera position and the front is away from it, so the orientation can likewise be determined.
Specifically, when training the vehicle attribute model, an initial vehicle attribute model is established; multiple sets of training data are input into the initial vehicle attribute model for training to obtain the vehicle attribute model, where each set of training data includes a vehicle image and a label indicating whichever of the front and the rear of the vehicle is closer to the camera position.
When judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, the lane in which the center point of the vehicle detection frame is located is first determined, then the driving direction of that lane is acquired and compared with the orientation of the vehicle.
For example, if the output of the vehicle attribute model is the rear of the vehicle, it can be determined that the vehicle faces away from the camera position; combined with the driving direction of the lane in which the vehicle is located, it can then be determined whether the driving direction of the lane is consistent with the orientation of the vehicle. If the output is the front of the vehicle, it can be determined that the vehicle faces toward the camera position, and the same comparison can be made.
In a specific example: if the vehicle faces away from the camera position and the driving direction of its lane is away from the camera, the lane direction and the vehicle orientation are consistent; if the vehicle faces toward the camera while the lane direction is away from the camera, they are inconsistent; if the vehicle faces away from the camera while the lane direction is toward the camera, they are inconsistent; and if the vehicle faces toward the camera and the lane direction is toward the camera, they are consistent.
Step 103: when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, judging whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane, determining that the vehicle is driving in the wrong direction.
When the driving direction of the lane is judged to be inconsistent with the orientation of the vehicle, no further judgment is needed: wrong-way driving can be determined directly, and alarm information about the vehicle can be sent to traffic control personnel. Specifically, a warning list can be established that includes the vehicle information of offending vehicles, and the warning list can be sent to traffic control personnel in real time or periodically. After the lane direction is judged to be inconsistent with the vehicle orientation, it is checked whether the vehicle's information is already in the warning list; if not, it is added.
Specifically, when judging whether the vehicle is reversing according to the driving direction of the vehicle, a corresponding tracking mark is assigned to each vehicle; for the tracking mark corresponding to each vehicle, the vehicle detection frames corresponding to the tracking mark in the latest N frames of road monitoring images are acquired, a preset position point of each vehicle detection frame is taken as a trajectory point to obtain a plurality of trajectory points, and the trajectory points are connected to obtain the trajectory of the vehicle. When the angle between the trajectory of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold, the vehicle is determined to be in a reversing state to be determined; when the vehicle is determined to be in that state at least M consecutive times and the length of the trajectory is greater than a preset second threshold, the vehicle is determined to be reversing, where M and N are integers greater than 1.
In this embodiment, a target tracking algorithm such as the SORT algorithm can be used to track the vehicle detection frames. Specifically, a Kalman filter is used to predict the position of a vehicle: the previous state value (for example, the position of the vehicle detection frame in the previous frame) and the current state measurement (for example, the position of the detection frame identified in the current frame) are used to predict the estimate of the next state (for example, the position of the detection frame in the next frame). The prediction is then matched against the detection result at the next moment (for example, the actual position of the detection frame identified in the next frame) using the Hungarian algorithm, and according to the matching result the detection frame predicted from the previous frame is associated with the detection frame detected in the next frame, so that the latter represents the successfully tracked frame; the tracking of the vehicle is thus completed and detection frames in different frames are associated. For example, for vehicle detection frame A, the positions of frame A in the k-th and (k+1)-th frame images are used to predict its position in the (k+2)-th frame image, yielding a prediction; the position of vehicle detection frame B in the (k+2)-th frame image is identified and matched with the prediction by the Hungarian algorithm. If the matching degree between the prediction and the position of frame B is greater than a set matching threshold, frame B can be associated with frame A and the two are considered detection frames of the same vehicle, completing the tracking of the vehicle across frames.
In this embodiment, to improve the accuracy of vehicle motion state assessment, N can be greater than or equal to 20, for example 20, 25, 30, 35, or 40.
Taking N = 25 as an example, each vehicle is assigned a corresponding tracking mark, and the correspondence between the tracking mark and the vehicle detection frames identified in the road monitoring images can be realized through the target tracking algorithm. For example, tracking mark D is assigned to vehicle C. At the first moment, the 25 vehicle detection frames corresponding to tracking mark D are identified from the 1st to the 25th frame images (the latest 25 frames); the preset position point of each detection frame is determined and taken as a trajectory point, giving 25 trajectory points, which are connected to obtain the vehicle trajectory at the first moment. When the angle between the trajectory at the first moment and the driving direction of the lane is greater than or equal to the preset first threshold, the value of a preset reversing judgment counter is increased by 1. At the second moment, the 25 detection frames corresponding to tracking mark D are identified from the 2nd to the 26th frame images (the latest 25 frames) and processed in the same way to obtain the trajectory at the second moment; if the angle condition holds again, the counter is increased by 1. At the third moment, the 3rd to the 27th frame images are used likewise, and so on. When the value of the counter is greater than or equal to M and the length of the vehicle trajectory is greater than the preset second threshold, the vehicle is judged to be reversing.
In this embodiment, the preset position point can be the center point of the vehicle detection frame; of course, it is not limited to the center point and can be another point of the detection frame, such as a vertex.
The judgment on the length of the vehicle trajectory is intended to avoid misjudging a vehicle that remains stationary as reversing.
When the vehicle reverses and its trajectory is parallel to the driving direction of its lane, the angle between the trajectory and the driving direction is 180°. To improve the accuracy of motion state assessment, it is not necessarily required that the angle be exactly 180° before the vehicle is determined to be in the reversing state to be determined; the vehicle can be so determined whenever the angle is greater than or equal to the preset first threshold. The first threshold can be greater than or equal to 85°, for example 120°, 121°, 122°, 123°, 124°, or 125°. The second threshold can be expressed in image pixels, for example greater than or equal to 100 pixels, such as 110, 120, 130, 140, 150, 160, 170, or 180 pixels.
After the vehicle is determined to be reversing, alarm information about the vehicle can be sent to traffic control personnel. Specifically, a warning list can be established that includes the vehicle information of reversing vehicles, and the warning list can be sent to traffic control personnel in real time or periodically. After the vehicle is determined to be reversing, it is checked whether the vehicle's information is already in the warning list; if not, it is added.
During tracking, some tracking marks disappear due to factors such as occlusion, and new tracking marks appear after the occluding object disappears. Such jumps in tracking marks cause discontinuity of the vehicle trajectory points and make it impossible to accurately judge the direction of vehicle movement, so the trajectory points need to be reconnected when vehicle tracking fails or the tracking mark jumps.
The newly added tracking mark needs to be matched with the tracking mark that disappeared at the same time. Specifically, the vehicle detection frame in the current frame of the road monitoring image is identified and matched with the vehicle detection frame corresponding to an existing tracking mark; if the matching degree between them is greater than a preset third threshold, that is, the newly added tracking mark can be matched to the mark that disappeared at the same time, the newly added tracking mark is replaced with the existing one and the correspondence between the detection frame in the current frame and the existing tracking mark is established. Specifically, the Hungarian algorithm can be used to perform this matching. This solves the problem of the vehicle trajectory disappearing when the tracking mark jumps and helps to accurately judge the direction of vehicle movement.
This embodiment can also judge whether the vehicle is parked. Specifically, when the length of the trajectory of the vehicle is less than or equal to the preset second threshold, the vehicle is determined to be in a parking state to be determined; when the vehicle is determined to be in the parking state to be determined at least K consecutive times, the vehicle is determined to be parked, where K is an integer greater than 1.
Taking N = 25 as an example, tracking mark D is assigned to vehicle C. At the first moment, the 25 vehicle detection frames corresponding to tracking mark D are identified from the 1st to the 25th frame images (the latest 25 frames); the preset position point of each detection frame is taken as a trajectory point, giving 25 trajectory points, which are connected to obtain the vehicle trajectory at the first moment. When the length of the trajectory at the first moment is less than the preset second threshold, the value of a preset parking judgment counter is increased by 1. At the second moment, the 2nd to the 26th frame images are processed likewise, and at the third moment the 3rd to the 27th, and so on. When the value of the parking judgment counter is greater than or equal to K, the vehicle is judged to be parked, after which the starting parking time of the vehicle can be updated to the current time.
In this embodiment, the orientation of the vehicle is determined according to the vehicle detection frame; it is first judged whether the orientation is consistent with the driving direction of the lane in which the vehicle is located, and only when they are consistent is it judged, according to the driving direction of the vehicle, whether the vehicle is reversing. This avoids confusing reversing with driving in the wrong direction, and can improve the recognition accuracy of reversing behavior.
An embodiment of the present disclosure further provides a reversing detection device, as shown in FIG. 3, including:
an identification module 21, configured to acquire a road monitoring image and identify the vehicle detection frames in the road monitoring image;
Specifically, the road can be photographed by a roadside camera to obtain a road monitoring image as shown in FIG. 2; road monitoring images include, but are not limited to, surveillance images of expressways, surveillance images of urban roads, and the like.
After the road monitoring image is acquired, polygonal frames can be used to calibrate the lanes in the image. As shown in FIG. 2, lanes S1 and S2 are calibrated, after which their driving directions can be determined. In this example, the driving directions of lanes S1 and S2 are opposite; the driving direction of lane S1 can be recorded as 1 and that of lane S2 as -1, where 1 indicates the forward direction and -1 the reverse direction.
After the road monitoring image is acquired, the vehicle detection frames in it can be identified; a vehicle detection frame marks the position of a vehicle, and each detection frame uniquely corresponds to one vehicle. Specifically, a vehicle detection model can be trained in advance and used to identify the detection frames in the road monitoring image; the input of the model is a road monitoring image and the output is vehicle detection frames. In a specific example, multiple vehicle detection frames S3 are identified in the road monitoring image shown in FIG. 2.
When training the vehicle detection model, an initial vehicle detection model can be established and trained with multiple sets of training data, each set including a road monitoring image and the vehicle detection frames in it; after training with the multiple sets of training data, the vehicle detection model is obtained.
a first processing module 22, configured to, for each vehicle detection frame, determine the orientation of the vehicle according to the vehicle detection frame, and judge whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located;
Specifically, a pre-trained vehicle attribute model can be used to determine the orientation of the vehicle: a vehicle image is cropped from the road monitoring image according to the vehicle detection frame and input into the pre-trained vehicle attribute model, which outputs the front or the rear of the vehicle; the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
If the vehicle attribute model outputs the front of the vehicle, the front is closer to the camera position and the rear is away from it, so the orientation of the vehicle can be determined; if the model outputs the rear, the rear is closer to the camera position and the front is away from it, so the orientation can likewise be determined.
Specifically, when training the vehicle attribute model, an initial vehicle attribute model is established, and multiple sets of training data are input into it for training to obtain the vehicle attribute model, where each set of training data includes a vehicle image and a label indicating whichever of the front and the rear of the vehicle is closer to the camera position.
When judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, the lane in which the preset position point of the vehicle detection frame is located is first determined, then the driving direction of that lane is acquired and compared with the orientation of the vehicle.
For example, if the output of the vehicle attribute model is the rear of the vehicle, it can be determined that the vehicle faces away from the camera position; combined with the driving direction of the lane in which the vehicle is located, it can then be determined whether the lane's driving direction is consistent with the vehicle's orientation. If the output is the front, the vehicle faces toward the camera position, and the same comparison can be made.
a second processing module 23, configured to, when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, judge whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation is inconsistent with the lane's driving direction, determine that the vehicle is driving in the wrong direction.
When the driving direction of the lane is judged to be inconsistent with the orientation of the vehicle, no further judgment is needed: wrong-way driving can be determined directly, and alarm information about the vehicle can be sent to traffic control personnel. Specifically, a warning list can be established that includes the vehicle information of offending vehicles and sent to traffic control personnel in real time or periodically; after the inconsistency is judged, it is checked whether the vehicle's information is already in the warning list, and if not, it is added.
Specifically, when judging whether the vehicle is reversing according to the driving direction of the vehicle, a corresponding tracking mark is assigned to each vehicle; for the tracking mark corresponding to each vehicle, the vehicle detection frames corresponding to the tracking mark in the latest N frames of road monitoring images are acquired, a preset position point of each detection frame is taken as a trajectory point to obtain a plurality of trajectory points, and the trajectory points are connected to obtain the trajectory of the vehicle. When the angle between the trajectory and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold, the vehicle is determined to be in a reversing state to be determined; when the vehicle is determined to be in that state at least M consecutive times and the length of the trajectory is greater than a preset second threshold, the vehicle is determined to be reversing, where M and N are integers greater than 1.
In this embodiment, a target tracking algorithm such as the SORT algorithm can be used to track the vehicle detection frames: a Kalman filter predicts the position of the vehicle, using the previous state value (for example, the position of the detection frame in the previous frame) and the current state measurement (for example, the position of the detection frame identified in the current frame) to predict the estimate of the next state (for example, the position of the detection frame in the next frame). The prediction is matched against the detection result at the next moment (for example, the actual position of the detection frame identified in the next frame) by the Hungarian algorithm, and according to the matching result the detection frame predicted from the previous frame is associated with the detection frame detected in the next frame, so that the latter represents the successfully tracked frame; the tracking of the vehicle is completed and detection frames in different frames are associated. For example, for vehicle detection frame A, its positions in the k-th and (k+1)-th frame images are used to predict its position in the (k+2)-th frame image, yielding a prediction; the position of vehicle detection frame B in the (k+2)-th frame image is identified and matched with the prediction by the Hungarian algorithm. If the matching degree exceeds a set matching threshold, frame B is associated with frame A and the two are considered detection frames of the same vehicle, completing the tracking across frames.
In this embodiment, to improve the accuracy of vehicle motion state assessment, N can be greater than or equal to 20, for example 20, 25, 30, 35, or 40.
Taking N = 25 as an example, each vehicle is assigned a corresponding tracking mark, and the correspondence between the tracking mark and the detection frames identified in the road monitoring images can be realized through the target tracking algorithm. For example, tracking mark D is assigned to vehicle C. At the first moment, the 25 detection frames corresponding to mark D are identified from the 1st to the 25th frame images (the latest 25 frames), the preset position point of each is taken as a trajectory point, and the 25 trajectory points are connected to obtain the trajectory at the first moment; when the angle between this trajectory and the driving direction of the lane is greater than or equal to the preset first threshold, the value of the preset reversing judgment counter is increased by 1. At the second moment, the 2nd to the 26th frame images are processed likewise, and at the third moment the 3rd to the 27th, and so on. When the counter value is greater than or equal to M and the length of the trajectory is greater than the preset second threshold, the vehicle is judged to be reversing.
The judgment on the length of the trajectory is intended to avoid misjudging a vehicle that remains stationary as reversing.
When the vehicle reverses and its trajectory is parallel to the lane's driving direction, the angle between them is 180°. To improve the accuracy of motion state assessment, it is not necessarily required that the angle be exactly 180° before the vehicle is determined to be in the reversing state to be determined; the vehicle can be so determined whenever the angle is greater than or equal to the preset first threshold. The first threshold can be greater than or equal to 85°, for example 120°, 121°, 122°, 123°, 124°, or 125°; the second threshold can be expressed in image pixels, for example greater than or equal to 100 pixels, such as 110, 120, 130, 140, 150, 160, 170, or 180 pixels.
After the vehicle is determined to be reversing, alarm information about the vehicle can be sent to traffic control personnel. Specifically, a warning list can be established that includes the vehicle information of reversing vehicles and sent to traffic control personnel in real time or periodically; after the judgment, it is checked whether the vehicle's information is already in the warning list, and if not, it is added.
During tracking, some tracking marks disappear due to factors such as occlusion, and new tracking marks appear after the occluding object disappears; such jumps cause discontinuity of the trajectory points and make it impossible to accurately judge the direction of vehicle movement, so the trajectory points need to be reconnected when vehicle tracking fails or the tracking mark jumps.
The newly added tracking mark needs to be matched with the tracking mark that disappeared at the same time. Specifically, the vehicle detection frame in the current frame is identified and matched with the detection frame corresponding to an existing tracking mark; if the matching degree exceeds the preset third threshold, that is, the newly added mark can be matched to the mark that disappeared at the same time, the newly added mark is replaced with the existing one and the correspondence between the current frame's detection frame and the existing mark is established. This solves the problem of the trajectory disappearing when the tracking mark jumps and helps to accurately judge the direction of vehicle movement.
This embodiment can also judge whether the vehicle is parked: when the length of the trajectory is less than or equal to the preset second threshold, the vehicle is determined to be in a parking state to be determined, and when this is determined at least K consecutive times, the vehicle is determined to be parked, where K is an integer greater than 1.
Taking N = 25 as an example, tracking mark D is assigned to vehicle C. At the first moment, the 25 detection frames corresponding to mark D are identified from the 1st to the 25th frame images (the latest 25 frames), their preset position points are taken as trajectory points, and the 25 points are connected to obtain the trajectory at the first moment; when its length is less than the preset second threshold, the value of the preset parking judgment counter is increased by 1. The 2nd to the 26th frame images are processed likewise at the second moment, the 3rd to the 27th at the third, and so on. When the counter value is greater than or equal to K, the vehicle is judged to be parked, after which the starting parking time of the vehicle can be updated to the current time.
In this embodiment, the orientation of the vehicle is determined according to the vehicle detection frame; it is first judged whether the orientation is consistent with the driving direction of the lane in which the vehicle is located, and only when they are consistent is it judged, according to the driving direction of the vehicle, whether the vehicle is reversing. This avoids confusing reversing with driving in the wrong direction and can improve the recognition accuracy of reversing behavior.
In some embodiments, the first processing module 22 includes:
a determination unit, configured to determine the lane in which the center point of the vehicle detection frame is located;
an acquisition unit, configured to acquire the driving direction of the lane and compare the driving direction of the lane with the orientation of the vehicle.
In some embodiments, the first processing module 22 further includes:
an orientation processing unit, configured to input the vehicle detection frame into a pre-trained vehicle attribute model, where the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
In some embodiments, the device further includes:
a training module, configured to establish an initial vehicle attribute model and input multiple sets of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, where each set of training data includes a vehicle image and a label indicating whichever of the front and the rear of the vehicle is closer to the camera position.
In some embodiments, the second processing module 23 includes:
an allocation unit, configured to assign a corresponding tracking mark to each vehicle;
a trajectory acquisition unit, configured to, for the tracking mark corresponding to each vehicle, acquire the vehicle detection frames corresponding to the tracking mark in the latest N frames of road monitoring images, take a preset position point of each detection frame as a trajectory point to obtain a plurality of trajectory points, and connect the trajectory points to obtain the trajectory of the vehicle;
a judgment unit, configured to determine that the vehicle is in a reversing state to be determined when the angle between the trajectory of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold;
a reversing judgment unit, configured to determine that the vehicle is reversing when the vehicle is determined to be in the reversing state to be determined at least M consecutive times and the length of the trajectory of the vehicle is greater than a preset second threshold;
where M and N are integers greater than 1.
In some embodiments, the second processing module further includes:
a parking judgment unit, configured to determine that the vehicle is in a parking state to be determined when the length of the trajectory of the vehicle is less than or equal to the preset second threshold, and to determine that the vehicle is parked when the vehicle is determined to be in the parking state to be determined at least K consecutive times;
where K is an integer greater than 1.
In some embodiments, the device further includes:
a matching module, configured to identify the vehicle detection frame in the current frame of the road monitoring image, match it with the vehicle detection frame corresponding to an existing tracking mark, and, if the matching degree between them is greater than a preset third threshold, establish a correspondence between the vehicle detection frame in the current frame and the existing tracking mark.
An embodiment of the present disclosure further provides a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, the steps of the reversing detection method described above are implemented.
Here, the processor is the processor in the terminal described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, which includes a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instruction to implement the processes of the above reversing detection method embodiments, with the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
An embodiment of the present application further provides a computer program/program product stored in a storage medium, where the computer program/program product is executed by at least one processor to implement the processes of the above reversing detection method embodiments, with the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, herein, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element. In addition, it should be pointed out that the scope of the methods and devices in the embodiments of the present application is not limited to performing functions in the order shown or discussed, and may also include performing functions in a substantially simultaneous manner or in a reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and including several instructions for enabling a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific implementations described above, which are merely illustrative rather than restrictive. Under the inspiration of the present application, those of ordinary skill in the art can make many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (16)

  1. A reversing detection method, characterized by comprising:
    acquiring a road monitoring image, and identifying vehicle detection frames in the road monitoring image;
    for each of the vehicle detection frames, determining the orientation of the vehicle according to the vehicle detection frame, and judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located;
    when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, judging whether the vehicle is reversing according to the driving direction of the vehicle, and when the orientation of the vehicle is inconsistent with the driving direction of the lane in which the vehicle is located, determining that the vehicle is driving in the wrong direction.
  2. The reversing detection method according to claim 1, characterized in that judging whether the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located comprises:
    determining the lane in which the center point of the vehicle detection frame is located;
    acquiring the driving direction of the lane, and comparing the driving direction of the lane with the orientation of the vehicle.
  3. The reversing detection method according to claim 1, characterized in that determining the orientation of the vehicle according to the vehicle detection frame comprises:
    inputting the vehicle detection frame into a pre-trained vehicle attribute model, which outputs the front or the rear of the vehicle, wherein the result output by the vehicle attribute model is whichever of the front and the rear of the vehicle is closer to the camera position.
  4. The reversing detection method according to claim 3, characterized in that the method further comprises a step of training the vehicle attribute model, the step comprising:
    establishing an initial vehicle attribute model;
    inputting multiple sets of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, wherein each set of training data comprises a vehicle image and a label indicating whichever of the front and the rear of the vehicle is closer to the camera position.
  5. The reversing detection method according to claim 1, wherein determining whether the vehicle is reversing according to the driving direction of the vehicle comprises:
    assigning a corresponding tracking identifier to each vehicle;
    for the tracking identifier corresponding to each vehicle, acquiring the vehicle detection frames corresponding to the tracking identifier in the latest N frames of road surveillance images, and taking a preset position point of each vehicle detection frame as a trajectory point to obtain a plurality of trajectory points;
    connecting the plurality of trajectory points to obtain a trajectory of the vehicle;
    when an angle between the trajectory of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold, determining that the vehicle is in a to-be-determined reversing state;
    when the vehicle is determined to be in the to-be-determined reversing state at least M consecutive times and the length of the trajectory of the vehicle is greater than a preset second threshold, determining that the vehicle is reversing;
    wherein M and N are integers greater than 1.
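The per-track reversing test of claim 5 can be sketched as follows. The buffer size N, the angle and length thresholds, and M are illustrative values, and taking the trajectory direction from the oldest to the newest trajectory point is one plausible reading of "connecting the trajectory points", not the only one.

```python
import math
from collections import deque

class ReversingJudge:
    """Sketch of claim 5: per tracking identifier, keep the latest N
    trajectory points and flag reversing after M consecutive
    to-be-determined reversing states on a sufficiently long trajectory."""

    def __init__(self, n=20, m=3, first_threshold_deg=150.0, second_threshold=2.0):
        self.points = deque(maxlen=n)          # latest N trajectory points
        self.m = m
        self.first_threshold = first_threshold_deg
        self.second_threshold = second_threshold
        self.consecutive = 0

    def update(self, point, lane_direction_deg) -> bool:
        self.points.append(point)
        if len(self.points) < 2:
            return False
        # Trajectory direction: from the oldest to the newest point.
        dx = self.points[-1][0] - self.points[0][0]
        dy = self.points[-1][1] - self.points[0][1]
        heading = math.degrees(math.atan2(dy, dx)) % 360.0
        angle = abs(heading - lane_direction_deg) % 360.0
        angle = min(angle, 360.0 - angle)      # angle to the lane direction
        pts = list(self.points)
        length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
        if angle >= self.first_threshold:
            self.consecutive += 1              # to-be-determined reversing state
        else:
            self.consecutive = 0
        return self.consecutive >= self.m and length > self.second_threshold
```

The length condition is what separates reversing from parking: a trajectory pointing against the lane but shorter than the second threshold is treated as a parking candidate (claim 6) rather than a reversing event.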
  6. The reversing detection method according to claim 5, wherein the method further comprises:
    when the length of the trajectory of the vehicle is less than or equal to the preset second threshold, determining that the vehicle is in a to-be-determined parking state;
    when the vehicle is determined to be in the to-be-determined parking state at least K consecutive times, determining that the vehicle has parked;
    wherein K is an integer greater than 1.
  7. The reversing detection method according to claim 5, wherein N is greater than or equal to 20.
  8. The reversing detection method according to claim 5, wherein acquiring the vehicle detection frame corresponding to the tracking identifier in each frame of image comprises:
    identifying the vehicle detection frames in the current frame of the road surveillance image;
    matching each vehicle detection frame against the vehicle detection frames corresponding to existing tracking identifiers, and, if the matching degree between the vehicle detection frame and a vehicle detection frame corresponding to an existing tracking identifier is greater than a preset third threshold, establishing a correspondence between the vehicle detection frame in the current frame of the road surveillance image and that existing tracking identifier.
  9. A reversing detection apparatus, comprising:
    an identification module, configured to acquire a road surveillance image and identify vehicle detection frames in the road surveillance image;
    a first processing module, configured to, for each vehicle detection frame, determine an orientation of a vehicle according to the vehicle detection frame, and determine whether the orientation of the vehicle is consistent with a driving direction of a lane in which the vehicle is located;
    a second processing module, configured to determine whether the vehicle is reversing according to a driving direction of the vehicle when the orientation of the vehicle is consistent with the driving direction of the lane in which the vehicle is located, and to determine that the vehicle is driving in the wrong direction when the orientation of the vehicle is inconsistent with the driving direction of the lane in which the vehicle is located.
  10. The reversing detection apparatus according to claim 9, wherein the first processing module comprises:
    a determination unit, configured to determine the lane in which a center point of the vehicle detection frame is located;
    an acquisition unit, configured to acquire the driving direction of the lane and compare the driving direction of the lane with the orientation of the vehicle.
  11. The reversing detection apparatus according to claim 9, wherein the first processing module further comprises:
    an orientation processing unit, configured to input the vehicle detection frame into a pre-trained vehicle attribute model, wherein the result output by the vehicle attribute model is the one of the vehicle front and the vehicle rear that is closer to the camera position.
  12. The reversing detection apparatus according to claim 11, wherein the apparatus further comprises:
    a training module, configured to establish an initial vehicle attribute model, and to input multiple sets of training data into the initial vehicle attribute model for training to obtain the vehicle attribute model, each set of training data comprising a vehicle image and the one of the vehicle front and the vehicle rear that is closer to the camera position.
  13. The reversing detection apparatus according to claim 9, wherein the second processing module comprises:
    an assignment unit, configured to assign a corresponding tracking identifier to each vehicle;
    a trajectory acquisition unit, configured to, for the tracking identifier corresponding to each vehicle, acquire the vehicle detection frames corresponding to the tracking identifier in the latest N frames of road surveillance images, take a preset position point of each vehicle detection frame as a trajectory point to obtain a plurality of trajectory points, and connect the plurality of trajectory points to obtain a trajectory of the vehicle;
    a judgment unit, configured to determine that the vehicle is in a to-be-determined reversing state when an angle between the trajectory of the vehicle and the driving direction of the lane in which the vehicle is located is greater than or equal to a preset first threshold;
    a reversing judgment unit, configured to determine that the vehicle is reversing when the vehicle is determined to be in the to-be-determined reversing state at least M consecutive times and the length of the trajectory of the vehicle is greater than a preset second threshold;
    wherein M and N are integers greater than 1.
  14. The reversing detection apparatus according to claim 13, wherein the second processing module further comprises:
    a parking judgment unit, configured to determine that the vehicle is in a to-be-determined parking state when the length of the trajectory of the vehicle is less than or equal to the preset second threshold, and to determine that the vehicle has parked when the vehicle is determined to be in the to-be-determined parking state at least K consecutive times;
    wherein K is an integer greater than 1.
  15. The reversing detection apparatus according to claim 13, wherein the apparatus further comprises:
    a matching module, configured to identify vehicle detection frames in the current frame of the road surveillance image, match each vehicle detection frame against the vehicle detection frames corresponding to existing tracking identifiers, and, if the matching degree between the vehicle detection frame and a vehicle detection frame corresponding to an existing tracking identifier is greater than a preset third threshold, establish a correspondence between the vehicle detection frame in the current frame of the road surveillance image and that existing tracking identifier.
  16. A readable storage medium having a program or instructions stored thereon, wherein, when the program or instructions are executed by a processor, the steps of the reversing detection method according to any one of claims 1 to 8 are implemented.
PCT/CN2023/121892 2022-11-11 2023-09-27 Reversing detection method and apparatus WO2024098992A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211415757.8A CN115762153A (zh) 2022-11-11 2022-11-11 Reversing detection method and apparatus
CN202211415757.8 2022-11-11

Publications (1)

Publication Number Publication Date
WO2024098992A1 true WO2024098992A1 (zh) 2024-05-16

Family

ID=85370000

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/121892 WO2024098992A1 (zh) 2022-11-11 2023-09-27 倒车检测方法及装置

Country Status (2)

Country Link
CN (1) CN115762153A (zh)
WO (1) WO2024098992A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115762153A (zh) * 2022-11-11 2023-03-07 BOE Technology Group Co., Ltd. Reversing detection method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112530170A (zh) * 2020-12-16 2021-03-19 Jinan Boguan Intelligent Technology Co., Ltd. Vehicle driving state detection method and apparatus, electronic device, and storage medium
CN113177509A (zh) * 2021-05-19 2021-07-27 Zhejiang Dahua Technology Co., Ltd. Reversing behavior recognition method and apparatus
CN113903008A (zh) * 2021-10-26 2022-01-07 COSCO Shipping Technology Co., Ltd. Ramp-exit vehicle violation recognition method based on deep learning and trajectory tracking
CN114049610A (zh) * 2021-12-02 2022-02-15 Traffic Management Research Institute of the Ministry of Public Security Active discovery method for reversing and wrong-way driving violations of motor vehicles on expressways
CN115762153A (zh) * 2022-11-11 2023-03-07 BOE Technology Group Co., Ltd. Reversing detection method and apparatus


Also Published As

Publication number Publication date
CN115762153A (zh) 2023-03-07

Similar Documents

Publication Publication Date Title
US10403138B2 (en) Traffic accident warning method and traffic accident warning apparatus
CN110146097B (zh) 自动驾驶导航地图的生成方法、系统、车载终端及服务器
CN109345829B (zh) 无人车的监控方法、装置、设备及存储介质
CN110532916B (zh) 一种运动轨迹确定方法及装置
CN111780987B (zh) 自动驾驶车辆的测试方法、装置、计算机设备和存储介质
CN110606093A (zh) 车辆性能评测方法、装置、设备和存储介质
CN111582189B (zh) 交通信号灯识别方法、装置、车载控制终端及机动车
WO2022227766A1 (zh) 交通异常检测的方法和装置
WO2021155685A1 (zh) 一种更新地图的方法、装置和设备
CN107909012B (zh) 一种基于视差图的实时车辆跟踪检测方法与装置
WO2024098992A1 (zh) 倒车检测方法及装置
CN109284801B (zh) 交通指示灯的状态识别方法、装置、电子设备及存储介质
CN108932849B (zh) 一种记录多台机动车低速行驶违法行为的方法及装置
JP2018501543A (ja) 現在存在する走行状況を特定するための方法及び装置
CN111967396A (zh) 障碍物检测的处理方法、装置、设备及存储介质
CN111967384A (zh) 车辆信息处理方法、装置、设备及计算机可读存储介质
US20210048819A1 (en) Apparatus and method for determining junction
JP2016194815A (ja) 地物画像認識システム、地物画像認識方法及びコンピュータプログラム
CN109344776B (zh) 数据处理方法
CN114264310A (zh) 定位及导航方法、装置、电子设备、计算机存储介质
CN115257771B (zh) 一种路口识别方法、电子设备和存储介质
Bhandari et al. Fullstop: A camera-assisted system for characterizing unsafe bus stopping
CN114693722B (zh) 一种车辆行驶行为检测方法、检测装置及检测设备
CN110660225A (zh) 闯红灯行为检测方法、装置和设备
CN113538968B (zh) 用于输出信息的方法和装置