CN113012439A - Vehicle detection method, device, equipment and storage medium - Google Patents


Info

Publication number: CN113012439A (application CN202110335726.0A; granted publication CN113012439B)
Authority: CN (China)
Legal status: Granted; Active
Other languages: Chinese (zh)
Prior art keywords: vehicle, snapshot, current, license plate, result set
Inventors: 高治力, 王召
Current and original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/017: Detecting movement of traffic to be counted or controlled; identifying vehicles
    • G08G1/0175: Detecting movement of traffic to be counted or controlled; identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Abstract

The disclosure provides a vehicle detection method, device, equipment and storage medium, and relates to the field of image processing, in particular to the field of vehicle detection. The method comprises the following steps: acquiring a sending period and a maximum sending times for sending snapshot images of a vehicle of the same train number to a processing model; acquiring a current candidate snapshot image of a target vehicle; determining a target snapshot image of the target vehicle according to the sending period, the maximum sending times and the current candidate snapshot image; and sending the target snapshot image to the processing model for processing, so as to acquire the vehicle information of the target vehicle.

Description

Vehicle detection method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a vehicle detection method, apparatus, device, and storage medium.
Background
In the existing vehicle detection scheme, a video frame sequence containing vehicle snapshot information is input into a vehicle detection algorithm model, so that a vehicle snapshot image is output, and then the vehicle snapshot image is sequentially sent into a license plate detection and identification module, a vehicle attribute detection module, a vehicle feature extraction module and a vehicle type identification module, so that corresponding vehicle information is obtained.
In the conventional method, because video frames are continuous, the same vehicle appears in a plurality of consecutive video frames. When these video frames are input into the vehicle detection algorithm model, a series of snapshot images of the same vehicle are output, so the vehicle attribute detection module, the vehicle feature extraction module and the vehicle type identification module are called many times for the same vehicle, consuming a large amount of computing resources such as Graphics Processing Unit (GPU) and Central Processing Unit (CPU) resources. In addition, the plurality of vehicle snapshot images of the same vehicle and the extracted information, such as vehicle attributes (e.g., vehicle color and vehicle type), vehicle feature values and recognized vehicle types, consume a large amount of storage resources such as disk and memory.
Disclosure of Invention
The present disclosure provides a method, apparatus, device and storage medium for vehicle detection.
According to an aspect of the present disclosure, there is provided a vehicle detection method including:
acquiring a sending period and the maximum sending times for sending the snapshot of the vehicles in the same train number to the processing model;
acquiring a current candidate snapshot of a target vehicle;
determining a target snapshot image of the target vehicle according to the sending period, the maximum sending times and the current candidate snapshot image;
and sending the target snapshot picture to the processing model for processing so as to acquire the vehicle information of the target vehicle.
In an embodiment of the present disclosure, the method further includes: acquiring the time interval from the last sending of the snapshot of the target vehicle to the processing model at the current moment and the cumulative sending times of the snapshot of the target vehicle;
wherein the determining the target snapshot map of the target vehicle according to the transmission cycle, the maximum number of transmissions, and the current candidate snapshot map comprises:
and determining the current candidate snapshot images as target snapshot images under the condition that the time interval meets the sending period and the cumulative sending times of the snapshot images do not exceed the maximum sending times.
In an embodiment of the present disclosure, the method further includes: acquiring a detection range threshold and a detection confidence threshold;
wherein the obtaining of the current candidate snapshot of the target vehicle comprises:
acquiring a frame of target video frame, and determining the target video frame as a current video frame;
inputting the current video frame into a preset initial detection model to obtain at least one initial detection result of the current video frame;
determining a current vehicle result set and a current license plate result set corresponding to the current video frame according to the initial detection result which meets the detection range threshold and the detection confidence threshold in the at least one initial detection result;
determining a vehicle corresponding to any element in the current vehicle result set as the target vehicle under the condition that the current vehicle result set is not empty;
acquiring a historical vehicle result set and a historical license plate result set corresponding to a historical video frame;
and determining a current candidate snapshot of the target vehicle according to the current vehicle result set, the current license plate result set, the historical vehicle result set and the historical license plate result set.
In an embodiment of the present disclosure, the acquiring a frame of target video frame includes:
reading a frame of video coding data;
sending the video coding data to a decoder for decoding to obtain a first format video frame;
performing color space transformation on the first format video frame to obtain a second format video frame;
determining the second format video frame as the target video frame.
In an embodiment of the present disclosure, the initial detection result includes coordinates of a detection frame used for framing a snapshot object on the target video frame, a type of the snapshot object, and a detection result confidence, where the type of the snapshot object is a vehicle or a license plate;
determining a current vehicle result set and a current license plate result set corresponding to the current video frame according to the initial detection result which satisfies the detection range threshold and the detection confidence threshold in the at least one initial detection result, including:
and for each initial detection result, when the coordinate of a detection frame in the initial detection result is within a detection range limited by the detection range threshold and the detection result confidence in the initial detection result is greater than or equal to the detection confidence threshold, adding the initial detection result to the current vehicle result set according to the type of a snapshot object in the initial detection result as a vehicle, or adding the initial detection result to the current license plate result set according to the type of the snapshot object in the initial detection result as a license plate.
In an embodiment of the present disclosure, the method further includes:
under the condition that the current vehicle result set and the current license plate result set are not empty, binding the vehicle snapshot images and license plate snapshot images of vehicles of the same train number in the current video frame according to the coordinates of the detection frames in the current vehicle result set and the coordinates of the detection frames in the current license plate result set;
and inputting the license plate snapshot picture which is bound with the corresponding vehicle snapshot picture into a preset license plate recognition model to acquire corresponding license plate information.
In an embodiment of the present disclosure, the determining a current candidate snapshot of the target vehicle according to the current vehicle result set, the current license plate result set, the historical vehicle result set, and the historical license plate result set includes:
marking the vehicles of each train number corresponding to the elements in the current vehicle result set and the historical vehicle result set to obtain the vehicle identification ID of the vehicles of each train number;
determining the area of a corresponding vehicle snapshot according to the coordinates of each detection frame in the current vehicle result set and the historical vehicle result set;
acquiring license plate information corresponding to all vehicle IDs determined by the current license plate result set and the historical license plate result set;
obtaining a quality score of the vehicle snapshot image corresponding to each vehicle ID according to the area and the license plate information of the vehicle snapshot image corresponding to each vehicle ID, a preset vehicle size weight value and a preset license plate information weight value;
and determining the vehicle snapshot image with the highest quality score in the vehicle snapshot images corresponding to each vehicle ID as the current candidate snapshot image corresponding to each vehicle ID.
According to another aspect of the present disclosure, there is provided a vehicle detection apparatus including:
the first acquisition module is used for acquiring the sending period and the maximum sending times of sending the snapshot of the vehicles in the same train number to the processing model;
the second acquisition module is used for acquiring a current candidate snapshot of the target vehicle;
the snapshot determining module is used for determining a target snapshot of the target vehicle according to the sending period, the maximum sending times and the current candidate snapshot;
and the information acquisition module is used for sending the target snapshot image to the processing model for processing so as to acquire the vehicle information of the target vehicle.
According to another aspect of the present disclosure, there is also provided an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the embodiments of the present disclosure.
According to the embodiments of the present disclosure, the sending period and the maximum sending times for sending snapshot images of a vehicle of the same train number to the processing model are obtained, a current candidate snapshot image of the target vehicle is obtained, a target snapshot image of the target vehicle is determined according to the sending period, the maximum sending times and the current candidate snapshot image, and the target snapshot image is then sent to the processing model for processing, so that the vehicle information of the target vehicle is obtained. Therefore, the number of snapshot images sent to the processing model for a vehicle of the same train number is greatly reduced, the number of calls to the processing model is greatly reduced, and a large amount of computing resources such as GPU and CPU resources is saved. Meanwhile, the amount of intermediate processing data generated by the processing model when processing vehicle snapshot images is greatly reduced, so that a large amount of storage resources for storing vehicle snapshot images and intermediate processing data is saved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow chart diagram of a vehicle detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram of another vehicle detection method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of an example application of a vehicle detection method according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram illustrating a method for binding a vehicle and a license plate according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a vehicle detection device according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a vehicle detection method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It can be understood that, in the field of vehicle detection, due to the continuity of video frames, the same vehicle appears in a plurality of consecutive video frames. When these video frames are input into the vehicle detection algorithm model, a series of snapshot images of the same vehicle are output, so the vehicle attribute detection module, the vehicle feature extraction module and the vehicle type identification module are called many times for the same vehicle, consuming a large amount of computing resources such as GPU (graphics processing unit) and CPU (central processing unit) resources. In addition, the plurality of vehicle snapshot images of the same vehicle and the extracted information, such as vehicle attributes (e.g., vehicle color and vehicle type), vehicle feature values and recognized vehicle types, consume a large amount of storage resources such as disk and memory. In the present disclosure, for a vehicle of the same train number, before its snapshot images are sent to the processing model, a sending period and a maximum sending times for sending the snapshot images to the processing model are set. This reduces the number of snapshot images of the vehicle of the same train number actually sent to the processing model, reduces the number of actual calls to the processing model, and reduces the amount of subsequently generated intermediate processing data, so that a large amount of computing resources and storage resources can be saved.
Fig. 1 is a schematic flow chart of a vehicle detection method according to an embodiment of the disclosure, as shown in fig. 1, the method includes the following steps:
step 101, obtaining a sending period and a maximum sending frequency for sending the snapshot of the vehicles in the same train number to a processing model.
Here, vehicles of the same train number refer to the same vehicle. The sending period refers to the time interval between two adjacent sends of a vehicle snapshot image of a vehicle of the same train number to the processing model. The maximum sending times is the preset upper limit on the number of times the snapshot images of a vehicle of the same train number are allowed to be sent to the processing model. The processing model may be a subsequent processing module that extracts valid vehicle information from the vehicle snapshot image.
In an embodiment of the present disclosure, the process model may include: the device comprises a vehicle attribute detection module, a vehicle characteristic extraction module and a vehicle type identification module. The vehicle attribute detection module includes, but is not limited to, detecting a vehicle color and a vehicle type.
It will be appreciated that setting a sending period for sending snapshot images of a vehicle of the same train number to the processing model, instead of continuously sending the vehicle snapshot images in the read consecutive video frames to the processing model, provides a time interval for preprocessing the vehicle snapshot images before the processing model processes them. Setting the maximum sending times directly limits the number of snapshot images of a vehicle of the same train number that are sent to the processing model.
And 102, acquiring a current candidate snapshot of the target vehicle.
The target vehicle is understood to be a vehicle to be detected in any number of vehicle ranks. The current candidate snapshot can be understood as a candidate of a vehicle snapshot currently selected as the target vehicle to be sent to the processing model.
It can be understood that obtaining the current candidate snapshot of the target vehicle is equivalent to preprocessing a plurality of vehicle snapshots of the target vehicle to screen out a vehicle snapshot meeting the constraint condition.
In an embodiment of the disclosure, a quality score of each vehicle snapshot of the target vehicle may be established, and by comparing the quality scores of the current vehicle snapshot and the historical vehicle snapshots, the vehicle snapshot with the high quality score is used as the current candidate snapshot of the target vehicle.
And 103, determining a target snapshot image of the target vehicle according to the transmission period, the maximum transmission times and the current candidate snapshot image.
The target snapshot may be understood as a finally determined snapshot of the target vehicle currently to be sent to the processing model.
It will be appreciated that setting the sending period and the maximum sending times directly reduces the number of vehicle snapshot images sent to the processing model, while determining the current candidate snapshot image is equivalent to screening out, from the plurality of vehicle snapshot images of the target vehicle, the one that best meets the constraint condition (for example, the highest quality score), which further reduces the number of vehicle snapshot images sent to the processing model.
And 104, sending the target snapshot to a processing model for processing so as to acquire the vehicle information of the target vehicle.
It can be understood that, for vehicle detection, the determined target snapshot map already contains the most sufficient vehicle information of the target vehicle, and therefore, only the target snapshot map needs to be sent to the processing model for processing, and necessary vehicle information can be acquired.
It should be noted that the processing procedure applied by the processing model to the target snapshot image may be the same as or different from the normal vehicle snapshot image processing procedure, which is not limited herein.
According to the embodiments of the present disclosure, the sending period and the maximum sending times for sending snapshot images of a vehicle of the same train number to the processing model are obtained, a current candidate snapshot image of the target vehicle is obtained, a target snapshot image of the target vehicle is determined according to the sending period, the maximum sending times and the current candidate snapshot image, and the target snapshot image is then sent to the processing model for processing, so that the vehicle information of the target vehicle is obtained. Therefore, the number of snapshot images sent to the processing model for a vehicle of the same train number is greatly reduced, the number of calls to the processing model is greatly reduced, and a large amount of computing resources such as GPU and CPU resources is saved. Meanwhile, the amount of intermediate processing data generated by the processing model when processing vehicle snapshot images is greatly reduced, so that a large amount of storage resources for storing vehicle snapshot images and intermediate processing data is saved.
Fig. 2 is a schematic flow chart of another vehicle detection method according to an embodiment of the disclosure, as shown in fig. 2, the method includes the following steps:
step 201, obtaining the sending period and the maximum sending times for sending the snapshot of the vehicle of the same train number to the processing model.
Step 202, obtaining the time interval from the last sending of the snapshot of the target vehicle to the processing model at the current moment and the cumulative sending times of the snapshot of the target vehicle.
The cumulative sending times of the snapshot images may be understood as the accumulated number of times the snapshot images of the target vehicle have been sent to the processing model, counted from the first send up to the current moment.
In an embodiment of the disclosure, the time interval from the last snapshot of the target vehicle to the processing model at the current moment may be determined by obtaining the current system time and the historical system time of the snapshot of the target vehicle to the processing model at the last time, and calculating the time difference between the current system time and the historical system time.
It can be understood that the time interval from the last snapshot of the vehicle to the processing model at the current time is obtained mainly for determining whether the time interval satisfies the sending period from the snapshot of the vehicle in the same train to the processing model. And whether the maximum sending times corresponding to the target vehicle are exceeded or not can be judged by acquiring the accumulated sending times of the snapshot of the target vehicle.
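The gating described in steps 201 and 202 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class and method names, and keying per-vehicle state by a vehicle ID, are assumptions for clarity.

```python
import time
from typing import Dict, Optional


class SnapshotThrottle:
    """Sketch of the send-period / max-sends gating: a snapshot of a given
    vehicle is forwarded only if the time since its last send satisfies the
    sending period AND its cumulative sends are below the maximum."""

    def __init__(self, period_s: float, max_sends: int):
        self.period_s = period_s          # sending period (seconds)
        self.max_sends = max_sends        # maximum sending times
        self.last_sent: Dict[str, float] = {}   # vehicle_id -> last send time
        self.sent_count: Dict[str, int] = {}    # vehicle_id -> cumulative sends

    def should_send(self, vehicle_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if self.sent_count.get(vehicle_id, 0) >= self.max_sends:
            return False  # cumulative sending times already at the upper limit
        last = self.last_sent.get(vehicle_id)
        if last is not None and now - last < self.period_s:
            return False  # time interval does not yet satisfy the sending period
        self.last_sent[vehicle_id] = now
        self.sent_count[vehicle_id] = self.sent_count.get(vehicle_id, 0) + 1
        return True
```

When `should_send` returns True, the current candidate snapshot image becomes the target snapshot image and is forwarded to the processing model; otherwise the frame's snapshot is held back, which is what reduces the number of model calls.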
Step 203, obtaining a detection range threshold and a detection confidence threshold.
The detection range threshold is a threshold set for determining an effective detection range when a read video frame is preliminarily detected. The detection confidence threshold is a threshold used for judging whether the detection result is credible.
In an embodiment of the present disclosure, a rectangular image region inset a certain distance (assumed to be d) from the image edges may be set as the effective detection range.
In an embodiment of the present disclosure, the detection confidence threshold is set to conf, where 0 < conf ≤ 1.0.
It can be understood that setting the detection range threshold eliminates obviously incomplete vehicle snapshot images and/or license plate snapshot images located at the edge of the video frame, thereby reducing the subsequent processing of invalid data. Setting the detection confidence threshold allows the credibility of the preliminary detection result to be judged at the algorithm level, so that unreliable data can be eliminated, which also reduces the subsequent processing amount.
Step 204, a frame of target video frame is obtained, and the target video frame is determined as the current video frame.
In an embodiment of the present disclosure, step 204 may be further implemented by steps 2041 to 2044 as follows:
step 2041, a frame of video encoded data is read.
Step 2042, the video coded data is sent to a decoder for decoding to obtain a video frame in the first format.
In an embodiment of the present disclosure, the first format may be a YUV format.
Step 2043, the first format video frame is color space transformed to obtain a second format video frame.
In an embodiment of the present disclosure, the second format may be an ARGB format or a BGRA format.
It can be understood that, since the deep learning framework generally requires frame data in the ARGB format or the BGRA format, the video frame in the YUV format output in the previous step needs to be sent to the color space transformation module, and converted into a video frame in the ARGB format or the BGRA format.
Step 2044, determine the second format video frame as the target video frame.
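Steps 2041 to 2044 above (decode to YUV, then color-space transform) can be sketched as below. Real decoders typically emit subsampled YUV (e.g. I420), which libraries such as OpenCV convert in one call; this pure-NumPy version assumes already-upsampled full-resolution planes and BT.601 coefficients, purely for illustration.

```python
import numpy as np


def yuv_to_bgra(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Convert full-resolution Y/U/V planes (uint8) to a BGRA frame.

    Uses the standard BT.601 conversion; the alpha channel is set opaque.
    Assumes all three planes have the same height x width.
    """
    yf = y.astype(np.float32)
    uf = u.astype(np.float32) - 128.0
    vf = v.astype(np.float32) - 128.0
    r = yf + 1.402 * vf
    g = yf - 0.344136 * uf - 0.714136 * vf
    b = yf + 1.772 * uf
    a = np.full_like(yf, 255.0)
    bgra = np.stack([b, g, r, a], axis=-1)  # channel order B, G, R, A
    return np.clip(bgra, 0, 255).astype(np.uint8)
```

The resulting second-format frame is what gets fed to the initial detection model in the next step.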
Step 205, inputting the current video frame into a preset initial detection model to obtain at least one initial detection result of the current video frame.
In an embodiment of the disclosure, the initial detection model is a vehicle and license plate detection model, used for detecting whether the current video frame contains a vehicle snapshot image and/or a license plate snapshot image.
In an embodiment of the present disclosure, the initial detection result includes coordinates of a detection frame for framing a snap-shot object on the target video frame, a type of the snap-shot object, and a detection result confidence, where the type of the snap-shot object is a vehicle or a license plate.
And step 206, determining a current vehicle result set and a current license plate result set corresponding to the current video frame according to the initial detection result which meets the detection range threshold and the detection confidence threshold in the at least one initial detection result.
The current vehicle result set stores the vehicle-type initial detection results, corresponding to each train number, contained in the current video frame; the current license plate result set stores the license-plate-type initial detection results, corresponding to each train number, contained in the current video frame.
In an embodiment of the disclosure, for each initial detection result, when a coordinate of a detection frame in the initial detection result is within a detection range defined by a detection range threshold and a detection result confidence in the initial detection result is greater than or equal to a detection confidence threshold, the initial detection result is added to a current vehicle result set according to that a type of a snapshot object in the initial detection result is a vehicle, or the initial detection result is added to the current license plate result set according to that the type of the snapshot object in the initial detection result is a license plate.
It can be understood that the snap-shot images in the video frame can be further distinguished into vehicle snap-shot images and license plate snap-shot images by classifying the initial detection results according to the types of the vehicles and the license plates. Therefore, the information division of the snapshot image is more accurate, and the subsequent further processing is facilitated.
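The filtering and routing of steps 205 and 206 can be sketched as follows. The detection dict layout (`box`, `type`, `conf`) is an assumed representation, not taken from the patent; the margin-based range check mirrors the effective detection range described above.

```python
def split_detections(detections, margin, conf_thr, frame_w, frame_h):
    """Keep only initial detection results whose detection frame lies inside
    the effective detection range (at least `margin` pixels from every frame
    edge) with confidence >= conf_thr, then route them into the current
    vehicle result set or current license plate result set by object type."""
    vehicles, plates = [], []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        in_range = (x1 >= margin and y1 >= margin and
                    x2 <= frame_w - margin and y2 <= frame_h - margin)
        if not in_range or det["conf"] < conf_thr:
            continue  # outside the detection range or below the confidence threshold
        (vehicles if det["type"] == "vehicle" else plates).append(det)
    return vehicles, plates
```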
And step 207, under the condition that the current vehicle result set is not empty, determining the vehicle corresponding to any element in the current vehicle result set as the target vehicle.
It can be understood that the current video frame may contain snapshot images of vehicles of a plurality of train numbers. The current vehicle result set being non-empty indicates that there is a valid, usable snapshot image in the current video frame, so a vehicle corresponding to any element in the current vehicle result set is determined as a target vehicle, and the subsequent processing steps are performed for each in turn.
And step 208, under the condition that the current vehicle result set and the current license plate result set are not empty, binding the vehicle snapshot image and the license plate snapshot image of the vehicle in the same pass in the current video frame according to the coordinates of the detection frame in the current vehicle result set and the coordinates of the detection frame in the current license plate result set.
In an embodiment of the disclosure, the coordinates of the detection frames in the current vehicle result set determine the region where each vehicle snapshot image is located, and the coordinates of the detection frames in the current license plate result set determine the region where each license plate snapshot image is located. Whether the region of a license plate snapshot image is contained within the region of a vehicle snapshot image is then judged; if so, the two snapshot images are determined to correspond to the vehicle of the same train number, and the vehicle snapshot image and license plate snapshot image of that vehicle in the current video frame are bound.
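The containment-based binding of step 208 can be sketched as follows. The full-containment pairing rule follows the description above; the data layout and function name are illustrative assumptions.

```python
def bind_plates_to_vehicles(vehicle_dets, plate_dets):
    """Bind each license plate detection frame to the vehicle detection frame
    that fully contains it, i.e. pair the vehicle snapshot image and license
    plate snapshot image of the same train-number vehicle.

    Boxes are (x1, y1, x2, y2) tuples in frame coordinates.
    """
    bindings = []
    for plate in plate_dets:
        px1, py1, px2, py2 = plate["box"]
        for veh in vehicle_dets:
            vx1, vy1, vx2, vy2 = veh["box"]
            # plate region fully inside vehicle region -> same vehicle
            if vx1 <= px1 and vy1 <= py1 and px2 <= vx2 and py2 <= vy2:
                bindings.append((veh, plate))
                break
    return bindings
```

Each bound license plate snapshot is then the one forwarded to the license plate recognition model.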
And step 209, inputting the license plate snapshot image which is bound with the corresponding vehicle snapshot image into a preset license plate recognition model to acquire corresponding license plate information.
The license plate recognition model is used for recognizing the license plate snapshot image so as to determine specific license plate information.
And step 210, acquiring a historical vehicle result set and a historical license plate result set corresponding to the historical video frame.
The historical vehicle result set and the historical license plate result set can be understood as a vehicle result set and a license plate result set which are determined after initial detection is carried out on the historical target video frame.
In an embodiment of the present disclosure, the historical target video frame may be a previous target video frame of the target video frame corresponding to the current video frame, or a plurality of previous target video frames.
And step 211, determining a current candidate snapshot of the target vehicle according to the current vehicle result set, the current license plate result set, the historical vehicle result set and the historical license plate result set.
In an embodiment of the present disclosure, step 211 may include the following steps 2111 to 2115:
step 2111, marking the vehicles of each train number corresponding to the elements in the current vehicle result set and the historical vehicle result set to obtain the vehicle identification ID of the vehicles of each train number.
It can be understood that by marking an identity ID for the vehicle of each train number, the snapshot map and license plate information of each such vehicle can be stored against that ID, which facilitates further subsequent processing.
And step 2112, determining the area of the corresponding vehicle snapshot according to the coordinates of each detection frame in the current vehicle result set and the historical vehicle result set.
It can be understood that the coordinates of the detection frame can be used for determining the area size of the snapshot image besides determining the positions of the vehicle snapshot image and the license plate snapshot image in the video frame, and the area size of the snapshot image can be used as a quality score index of the snapshot image.
And step 2113, acquiring license plate information corresponding to all the vehicle IDs determined by the current license plate result set and the historical license plate result set.
It can be understood that the current license plate result set does not necessarily contain the license plate snapshot images of all vehicles of each train number, so the license plate information corresponding to all vehicle IDs needs to be determined from both the current license plate result set and the historical license plate result set.
In an embodiment of the present disclosure, the license plate snapshot corresponding to all the vehicle IDs is sent to the license plate recognition model to recognize specific license plate information.
In an embodiment of the present disclosure, the license plate Recognition model may be an Optical Character Recognition (OCR) model.
Step 2114, obtaining the quality score of the vehicle snapshot corresponding to each vehicle ID according to the area and the license plate information of the vehicle snapshot corresponding to each vehicle ID, and the preset vehicle size weight value and the preset license plate information weight value.
The vehicle size weight value represents the weight corresponding to the area of the vehicle snapshot image, and the license plate information weight value represents the weight corresponding to the recognized license plate information.
In an embodiment of the present disclosure, the license plate information score is matched according to the number of clearly recognizable symbols among the total characters, letters and numbers of the recognized license plate.
Step 2115, determining the vehicle snapshot with the highest quality score in the vehicle snapshot corresponding to each vehicle ID as the current candidate snapshot corresponding to each vehicle ID.
It can be understood that the vehicle snapshot images corresponding to each vehicle ID are quality-scored, and the vehicle snapshot image with the highest quality score is determined as the current candidate snapshot corresponding to that vehicle ID, so that the vehicle snapshot images of each vehicle ID are further screened and the number of vehicle snapshot images sent to the processing model is reduced.
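Steps 2111 to 2115 amount to keeping, per vehicle ID, the single snapshot with the highest quality score. A minimal sketch, assuming snapshots arrive as (vehicle_id, snapshot, quality) tuples; the names are illustrative:

```python
def pick_candidates(snapshots):
    """Step 2115: keep only the highest-quality snapshot per vehicle ID.

    snapshots is an iterable of (vehicle_id, snapshot, quality) tuples;
    the return value maps each ID to its best (snapshot, quality) pair.
    """
    best = {}
    for vid, snap, q in snapshots:
        # Replace the stored candidate only when this one scores higher.
        if vid not in best or q > best[vid][1]:
            best[vid] = (snap, q)
    return best
```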
And step 212, determining the current candidate snapshot images as the target snapshot images under the condition that the time interval meets the sending period and the cumulative sending times of the snapshot images do not exceed the maximum sending times.
It can be understood that this step mainly determines whether the current moment satisfies two constraint conditions, namely the set sending period and the maximum number of sending times; the current candidate snapshot is determined as the target snapshot only when both constraints are met at the same time, which greatly reduces the number of snapshots of the target vehicle actually sent to the processing model.
And step 213, sending the target snapshot to the processing model for processing so as to acquire the vehicle information of the target vehicle.
According to the method, the number of snapshot images sent to the processing model for vehicles of the same train number is greatly reduced, so the number of calls to the processing model is greatly reduced, saving a large amount of computing resources such as GPU and CPU. Meanwhile, the large amount of intermediate processing data generated by the processing model when processing vehicle snapshot images is greatly reduced, thereby saving a large amount of storage resources for storing the vehicle snapshot images and the intermediate processing data.
Fig. 3 is a flowchart illustrating an application example of a vehicle detection method according to an embodiment of the disclosure, as shown in fig. 3, the method includes the following steps:
Step 301, set as the effective detection range a rectangular image region whose borders lie at a certain distance (assume the distance is d) from the image edges.
Step 302, set the time interval t at which the vehicle snapshot map of a single vehicle pass (a single pass being the stage from a vehicle appearing in the video to its disappearing) and the corresponding license plate information (which is tied to a certain vehicle snapshot map and may or may not be present) are input to the processing model (the vehicle attribute detection algorithm module, the vehicle feature extraction algorithm module and the vehicle type identification algorithm module), where t ≥ 0.
Step 303, set the maximum number of times m that the vehicle information (vehicle snapshot map and license plate information) of a single vehicle pass in the video is input to the processing model (the vehicle attribute detection algorithm module, the vehicle feature extraction algorithm module and the vehicle type identification algorithm module), where m ≥ 1 and m is a positive integer.
Step 304, set the confidence threshold conf of the vehicle and license plate detection algorithm, where 0 < conf ≤ 1.0.
Step 305, receiving a real-time video stream, or opening an offline video file.
Step 306, reading a frame of video encoding data.
Step 307, the read video frame is sent to a decoder for decoding.
And 308, sending the video frame in the YUV format output in the previous step into a color space conversion module, and converting the video frame into a video frame in an ARGB format or a BGRA format.
And 309, sending the video frames in the ARGB format or the BGRA format into a vehicle and license plate detection algorithm to detect the vehicle and the license plate.
Step 310, if the vehicle and license plate detection algorithm does not output the detection result, directly entering step 306; if the vehicle and license plate detection algorithm module outputs the detection results, the confidence threshold judgment is carried out on the detection results (each detection result mainly comprises 3 parts of contents, namely the coordinates of the detection frame, the type of the detection frame and the confidence c of the detection result).
Step 311, check the confidence of each detection result in turn; if c ≥ conf, retain the detection result; if c < conf, discard the detection result.
Step 312, if there is no detection result satisfying the confidence threshold limit, step 306 is entered; if there are detection results that satisfy the confidence threshold limit, step 313 is entered.
Step 313, judge whether each detection result obtained in the previous step is within the effective detection range; if not, discard it; if it is within the range, retain it. After this judgment, if no detection results remain, go to step 306; if detection results remain, go to step 314.
And step 314, classifying the detection result obtained in the last step according to the type of the detection frame (namely the type of the snapshot object framed and selected by the detection frame, including the type of the vehicle and the type of the license plate). And (4) putting the detection result of the vehicle type into a queue A, and putting the detection result of the license plate type into a queue B.
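Steps 311 to 314 can be sketched as a single filtering-and-classification pass. The tuple layout, the type labels 'vehicle' and 'plate', and the concrete values of conf and d below are illustrative assumptions:

```python
CONF_THRESHOLD = 0.5  # assumed value of the threshold conf
MARGIN = 20           # assumed edge distance d, in pixels

def filter_and_classify(detections, frame_w, frame_h,
                        conf=CONF_THRESHOLD, d=MARGIN):
    """Filter detections by confidence and effective detection range,
    then split them into a vehicle queue (A) and a plate queue (B).

    Each detection is (box, obj_type, c) where box = (x1, y1, x2, y2),
    obj_type is 'vehicle' or 'plate', and c is the confidence.
    """
    queue_a, queue_b = [], []
    for box, obj_type, c in detections:
        if c < conf:                      # step 311: confidence check
            continue
        x1, y1, x2, y2 = box
        inside = (x1 >= d and y1 >= d and
                  x2 <= frame_w - d and y2 <= frame_h - d)
        if not inside:                    # step 313: effective range check
            continue
        # step 314: classify by snapshot-object type
        (queue_a if obj_type == 'vehicle' else queue_b).append(box)
    return queue_a, queue_b
```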
Step 315, judge whether the number of detection results in queue A is 0; if so, go to step 306; if the number of detection results in queue A is greater than 0, go to step 316.
Step 316, judging whether the number of the detection results of the queue B is 0, if so, entering step 318; and if the number of the detection results in the queue B is more than 0, performing the operation of binding the vehicle and the license plate.
In an embodiment of the present disclosure, as shown in fig. 4, the method for binding a vehicle and a license plate includes the following steps 401 to 408:
in step 401, the number of elements in queue a is calculated, assuming that Ca is obtained.
At step 402, the first element in queue B is marked bj.
At step 403, mark the first element in queue A as ai.
Step 404, judge whether element bj (a license plate coordinate frame) lies inside element ai (a vehicle coordinate frame). If bj is not inside ai, recalculate Ca (Ca = Ca - 1) and judge whether Ca is 0; if Ca is not 0, mark the element after ai in queue A as the new ai and return to step 404; if Ca is 0, remove the element marked bj from queue B and go to step 408. If bj is inside ai, the bj license plate is considered to belong to the ai vehicle, binding of the bj license plate and the ai vehicle is completed, and then go to step 405.
At step 405, the ai element is removed from queue A.
At step 406, the bj element is removed from queue B.
Step 407, judging whether there are any elements in the queue A; if no element exists in the queue A, the operation of binding the vehicle and the license plate is quitted; if there are more elements in queue A, then step 408 is entered.
Step 408, judging whether there are elements in the queue B; if no element exists in the queue B, the operation of binding the vehicle and the license plate is quitted; if there are more elements in queue B, then step 401 is entered.
The binding between a vehicle and the license plate belonging to the same train number can be realized by the vehicle and license plate binding method shown in fig. 4.
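A minimal Python sketch of the binding loop of steps 401 to 408, operating destructively on the two queues as the flowchart does; the box layout and function names are assumptions:

```python
def bind_plates_to_vehicles(queue_a, queue_b):
    """Bind each license plate box in queue_b to the vehicle box in
    queue_a that contains it, following the loop of steps 401-408.

    Returns a list of (vehicle_box, plate_box) pairs. Bound boxes are
    removed from both queues; a plate contained by no vehicle is dropped.
    """
    def contains(v, p):
        return v[0] <= p[0] and v[1] <= p[1] and p[2] <= v[2] and p[3] <= v[3]

    pairs = []
    while queue_a and queue_b:
        bj = queue_b[0]                   # step 402: head of queue B
        for ai in queue_a:                # steps 403-404: scan queue A
            if contains(ai, bj):          # plate lies inside this vehicle
                pairs.append((ai, bj))
                queue_a.remove(ai)        # steps 405-406
                queue_b.remove(bj)
                break
        else:                             # no vehicle contains this plate
            queue_b.pop(0)
    return pairs
```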
Step 317, for the series of vehicle snapshot images output by the previous step, judge whether there is a license plate snapshot image that has been bound to a corresponding vehicle snapshot image; if not, go to step 318; if so, send the license plate snapshot image to the license plate recognition algorithm (OCR) module to obtain the license plate information corresponding to the license plate snapshot image.
In one embodiment of the present disclosure, the license plate information is usually composed of a plurality of numbers, letters or characters. The license plate recognition algorithm outputs the accurate numbers, letters or characters for the clearer parts (e.g., "Jing A1111") and replaces each unclear symbol with a placeholder such as "*" (e.g., "Jing A1*1*").
Step 318, a series of vehicle snapshots in the frame are sent to a vehicle tracking algorithm module.
In an embodiment of the present disclosure, if the current frame is the first frame, the vehicle tracking module directly marks the id of the vehicle snapshot, assuming the id is id1. If the current frame is not the first frame, the vehicle tracking module looks for a matching vehicle in a particular region of the previous video frame image. If a matching vehicle exists and its id is id2, the id of the currently input vehicle snapshot is also id2; if there is no matching vehicle, the vehicle tracking module marks the id of this incoming vehicle snapshot as a new id3. Here id1, id2 and id3 are natural numbers and are all different, which guarantees that the id of each train number is unique.
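The disclosure leaves the matching rule of the vehicle tracking module unspecified; the toy tracker below uses intersection-over-union (IoU) against the previous frame as one plausible choice. The class, the threshold and the matching rule are all assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class SimpleTracker:
    """Toy tracker: a box is matched to a previous-frame track when IoU
    exceeds a threshold; otherwise a fresh id is issued, so each train
    number keeps a distinct id across frames.
    """
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.prev = {}        # id -> box seen in the previous frame
        self.next_id = 0

    def update(self, boxes):
        assigned = {}
        for box in boxes:
            best_id, best = None, self.iou_threshold
            for vid, pbox in self.prev.items():
                score = iou(box, pbox)
                if score > best:
                    best_id, best = vid, score
            if best_id is None:           # new vehicle: issue a fresh id
                best_id = self.next_id
                self.next_id += 1
            assigned[best_id] = box
        self.prev = assigned
        return assigned
```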
Step 319, use a map as the storage structure for vehicle snapshot maps: the key stores the vehicle id, and the value stores the vehicle snapshot map, the corresponding license plate information and the corresponding quality score quality.
In one embodiment of the present disclosure, the quality is calculated as follows:
1) Assume the two weight values are the vehicle size weight value ws and the license plate information weight value wp, where ws and wp are floating point numbers between 0 and 1 and ws + wp = 1;
2) Assume the area value of the vehicle snapshot is s. Establish a one-to-one mapping between different area sizes s1, s2, …, s10 and scores Qs (assume the scores corresponding to these areas are 10, 20, …, 100). For an area Sx with s1 ≤ Sx < s2, take Qs = 10; if Sx > s10, take Qs = 100.
3) Assume the total number of characters, letters and numbers on the license plate is n. Establish a one-to-one mapping between the number of clearly recognizable symbols (1, 2, …, n) and the score Qp (assume the corresponding scores are 100/n, 100 × 2/n, …, 100). The quality score is then calculated as: quality = Qs × ws + Qp × wp.
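The quality calculation in 1) to 3) can be sketched as follows; the weight values, the area breakpoints standing in for s1..s10 and the total symbol count n are illustrative assumptions, not values fixed by the disclosure:

```python
import bisect

def snapshot_quality(area, clear_symbols, total_symbols=7,
                     ws=0.6, wp=0.4,
                     area_breaks=(1000, 2000, 3000, 4000, 5000,
                                  6000, 7000, 8000, 9000, 10000)):
    """Quality score of a vehicle snapshot, following steps 1)-3).

    area_breaks plays the role of s1..s10; ws and wp satisfy ws + wp = 1.
    """
    # 2) map the snapshot area onto the scores 10, 20, ..., 100
    # (areas below s1 are clamped to the lowest score; an assumption)
    idx = bisect.bisect_right(area_breaks, area)
    qs = min(max(idx, 1), 10) * 10
    # 3) plate score proportional to the clearly recognizable symbols
    qp = 100.0 * clear_symbols / total_symbols
    return qs * ws + qp * wp
```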
In an embodiment of the present disclosure, assuming that the map structure variable storing the snapshot is M, there are two cases as follows for the vehicle snapshot generated in the current frame:
1) When the id of the snapshot is not in M, store into M the id of the snapshot, the corresponding vehicle snapshot and its (possibly absent) license plate information, the number of times m1 the vehicle snapshot has been sent, the current system time tc, and the calculated quality of the vehicle snapshot (assume its value is q), where m1 = 0;
2) When the id of the snapshot map already exists in M, first judge whether the number of times m1 that the snapshot of the current vehicle id has been sent is greater than or equal to m;
If m1 ≥ m, go to step 321;
If m1 < m, calculate the quality of this vehicle snapshot (assume its value is q1):
If the vehicle snapshot corresponding to the vehicle id is already empty, take the current vehicle snapshot and its license plate information as the vehicle snapshot and license plate information of that vehicle id, set q = q1, and then go to step 320;
If q1 ≤ q, do not update the snapshot map, then go to step 320;
If q1 > q, replace the previous vehicle snapshot with the current vehicle snapshot, set q = q1, and then jump to step 320.
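The two cases above (new id vs. id already in M) can be sketched as one update routine; the dict field names and the flat structure of M are assumptions made for illustration:

```python
def update_snapshot_store(M, vid, snapshot, plate, q1, now, max_sends):
    """Update the per-vehicle store M (step 319 plus cases 1) and 2)).

    M maps a vehicle id to a dict with keys 'snapshot', 'plate', 'q',
    'm1' (sends so far) and 'tc' (period start time).
    """
    if vid not in M:                       # case 1): first sighting
        M[vid] = {'snapshot': snapshot, 'plate': plate,
                  'q': q1, 'm1': 0, 'tc': now}
        return
    entry = M[vid]                         # case 2): id already tracked
    if entry['m1'] >= max_sends:           # send budget exhausted
        return
    # keep the current snapshot only if the slot is empty or it scores higher
    if entry['snapshot'] is None or q1 > entry['q']:
        entry.update(snapshot=snapshot, plate=plate, q=q1)
```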
Step 320, obtaining the current system time ts, and then sequentially obtaining tc of each vehicle id in M:
If ts - tc < t, go to step 321;
If ts - tc ≥ t, send the corresponding vehicle snapshot image to the processing model (vehicle attribute detection module, vehicle feature extraction module and vehicle type identification module), and, if corresponding license plate information exists, send the license plate information to the back-end processing module for display or storage; then delete the snapshot and license plate information of the sent vehicle from M and update the corresponding tc so that tc = ts.
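Step 320 can be sketched as a flush pass over M. Here send_fn stands in for the hand-off to the processing model, and the field names match the store sketch above only by assumption:

```python
def flush_due_snapshots(M, ts, t, send_fn):
    """Step 320: send every stored snapshot whose period has elapsed.

    For each vehicle id whose period start tc satisfies ts - tc >= t,
    hand the stored snapshot (and any plate info) to send_fn, clear the
    slot, count the send, and restart the period at ts.
    """
    for vid, entry in M.items():
        if ts - entry['tc'] < t:           # period not yet elapsed
            continue
        if entry['snapshot'] is not None:
            send_fn(vid, entry['snapshot'], entry['plate'])
            entry['m1'] += 1
        entry['snapshot'] = None           # delete sent data from M
        entry['plate'] = None
        entry['tc'] = ts                   # tc = ts
```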
Step 321, judging whether the vehicle in M disappears in the current frame;
if all the vehicles in M do not disappear in the current frame, jumping to step 306;
if some vehicles in M disappear from the current frame, traversing to check whether vehicle snapshot images corresponding to the disappeared vehicles are stored:
If a vehicle snapshot image is stored, send the corresponding vehicle snapshot image and license plate information to the processing model, then delete all information of the disappeared vehicle, including the id, the number of times m1 the vehicle snapshot has been sent, the period start time tc of the vehicle snapshot, the quality score q of the vehicle snapshot, the vehicle snapshot image and the license plate information, and then jump to step 306;
If no vehicle snapshot image is stored, delete all remaining information of the disappeared vehicle, including the id, the number of times m1 the vehicle snapshot has been sent, the period start time tc of the vehicle snapshot and the quality score q of the vehicle snapshot, and then jump to step 306.
Through the series of steps, a series of complete information of the vehicle can be obtained, wherein the complete information comprises a vehicle snapshot picture, license plate information, vehicle attribute information (including vehicle color, whether safety belts are fastened by personnel in the vehicle, and the like), vehicle characteristic information and vehicle model information, and therefore vehicle structurization can be achieved.
Compared with the traditional vehicle detection scheme, the scheme of the embodiment of the disclosure can mark the same id for the vehicle snapshot images of the same train number by adding the vehicle tracking module; by changing the vehicle detection algorithm (a single-classification algorithm) into a vehicle and license plate detection algorithm (a multi-classification algorithm), a single pass of image detection can obtain the vehicle snapshot image and the license plate snapshot image simultaneously; by providing a calculation method for the quality of the snapshot images, one or more snapshot images of better quality can be selected by comparing the quality of snapshots with the same vehicle id; through the set sending times and sending interval, a vehicle snapshot image is sent only when both are satisfied, reducing the calls to subsequent modules; and for vehicles that disappear before reaching the sending times and without exceeding the sending interval, the vehicle snapshot image and the corresponding license plate information are sent at the moment the vehicle disappears. Therefore, the number of snapshot images sent to the processing model for vehicles of the same train number is greatly reduced, the number of calls to the processing model is greatly reduced, and a large amount of computing resources such as GPU and CPU are saved. Meanwhile, the large amount of intermediate processing data generated by the processing model when processing vehicle snapshot images is greatly reduced, thereby saving a large amount of storage resources for storing the vehicle snapshot images and the intermediate processing data.
Fig. 5 is a schematic structural diagram of a vehicle detection device according to an embodiment of the present disclosure, as shown in fig. 5, the device includes:
the first obtaining module 501 is configured to obtain a sending period and a maximum sending frequency for sending the snapshot of the vehicle in the same train number to the processing model;
a second obtaining module 502, configured to obtain a current candidate snapshot of the target vehicle;
a snapshot determining module 503, configured to determine a target snapshot of the target vehicle according to the sending period, the maximum sending times, and the current candidate snapshot;
and the information acquisition module 504 is configured to send the target snapshot to the processing model for processing, so as to acquire vehicle information of the target vehicle.
In an embodiment of the present disclosure, the apparatus further includes: the third acquisition module is used for acquiring the time interval from the last sending of the snapshot of the target vehicle to the processing model at the current moment and the cumulative sending times of the snapshot of the target vehicle;
the snapshot map determining module 503 includes:
and the snapshot image determining unit is used for determining the current candidate snapshot image as the target snapshot image under the condition that the time interval meets the sending period and the cumulative sending times of the snapshot images do not exceed the maximum sending times.
In an embodiment of the present disclosure, the apparatus further includes: the fourth acquisition module is used for acquiring a detection range threshold and a detection confidence threshold;
the second obtaining module 502 includes:
the video frame acquisition unit is used for acquiring a frame of target video frame and determining the target video frame as a current video frame;
the initial detection unit is used for inputting the current video frame into a preset initial detection model so as to obtain at least one initial detection result of the current video frame;
a current result determining unit, configured to determine a current vehicle result set and a current license plate result set corresponding to a current video frame according to an initial detection result that satisfies a detection range threshold and a detection confidence threshold among at least one initial detection result;
the target vehicle determining unit is used for determining a vehicle corresponding to any element in the current vehicle result set as a target vehicle under the condition that the current vehicle result set is not empty;
the history acquisition unit is used for acquiring a history vehicle result set and a history license plate result set corresponding to the history video frame;
and the candidate determining unit is used for determining a current candidate snapshot of the target vehicle according to the current vehicle result set, the current license plate result set, the historical vehicle result set and the historical license plate result set.
In an embodiment of the present disclosure, the video frame acquisition unit includes:
the video reading subunit is used for reading one frame of video coding data;
the decoding subunit is used for sending the video coding data to a decoder for decoding to obtain a first format video frame;
the conversion subunit is used for carrying out color space conversion on the first format video frame to obtain a second format video frame;
and the target determining subunit is used for determining the second format video frame as the target video frame.
In an embodiment of the present disclosure, the initial detection result includes coordinates of a detection frame used for framing a snapshot object on a target video frame, a type of the snapshot object, and a detection result confidence, where the type of the snapshot object is a vehicle or a license plate;
a current result determination unit comprising:
and the result adding subunit is used for, for each initial detection result, when the coordinates of the detection frame in the initial detection result are within the detection range limited by the detection range threshold and the detection result confidence in the initial detection result is greater than or equal to the detection confidence threshold, adding the initial detection result to the current vehicle result set if the type of the snapshot object in the initial detection result is a vehicle, or adding the initial detection result to the current license plate result set if the type of the snapshot object is a license plate.
In an embodiment of the present disclosure, the apparatus further includes:
the vehicle license plate binding module is used for binding a vehicle snapshot image and a license plate snapshot image of a vehicle of the same number in a current video frame according to the coordinates of the detection frame in the current vehicle result set and the coordinates of the detection frame in the current license plate result set under the condition that the current vehicle result set and the current license plate result set are not empty;
and the license plate information acquisition module is used for inputting the license plate snapshot image which is bound with the corresponding vehicle snapshot image into a preset license plate recognition model so as to acquire the corresponding license plate information.
In an embodiment of the present disclosure, the candidate determining unit includes:
the identity marking subunit is used for marking the vehicles of each train number corresponding to the elements in the current vehicle result set and the historical vehicle result set so as to obtain the vehicle identity ID of the vehicles of each train number;
the area determining subunit is used for determining the area of the corresponding vehicle snapshot image according to the coordinates of each detection frame in the current vehicle result set and the historical vehicle result set;
the license plate obtaining subunit is used for obtaining license plate information corresponding to all the vehicle IDs determined by the current license plate result set and the historical license plate result set;
the quality scoring subunit is used for obtaining the quality score of the vehicle snapshot image corresponding to each vehicle ID according to the area and the license plate information of the vehicle snapshot image corresponding to each vehicle ID, and a preset vehicle size weight value and a preset license plate information weight value;
and the candidate determining subunit is used for determining the vehicle snapshot with the highest quality score in the vehicle snapshot corresponding to each vehicle ID as the current candidate snapshot corresponding to each vehicle ID.
The apparatus provided in the embodiment of the present disclosure can implement all the method steps implemented by the foregoing method embodiment, and can achieve the same technical effect, and details of the same parts and beneficial effects as those of the method embodiment in this embodiment are not repeated herein.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic apparatus 600 includes a calculation unit 601 that can perform various appropriate actions and processes in accordance with a computer program stored in a ROM (Read-Only Memory) 602 or a computer program loaded from a storage unit 608 into a RAM (Random Access Memory) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The calculation unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An I/O (Input/Output) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing Unit 601 include, but are not limited to, a CPU (Central Processing Unit), a GPU (graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing Units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable Processor, controller, microcontroller, and the like. The calculation unit 601 executes the respective methods and processes described above, such as the vehicle detection method. For example, in some embodiments, the vehicle detection method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the vehicle detection method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the vehicle detection method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be realized in digital electronic circuitry, Integrated circuitry, FPGAs (Field Programmable Gate arrays), ASICs (Application-Specific Integrated circuits), ASSPs (Application Specific Standard products), SOCs (System On Chip, System On a Chip), CPLDs (Complex Programmable Logic devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Electrically Programmable Read-Only-Memory) or flash Memory, an optical fiber, a CD-ROM (Compact Disc Read-Only-Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: a LAN (Local Area Network), a WAN (Wide Area Network), the Internet, and blockchain networks.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability in traditional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be noted that artificial intelligence is the discipline of studying how to make computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it involves both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (11)

1. A vehicle detection method, comprising:
acquiring a sending period and a maximum number of sending times for sending snapshots of a vehicle of the same vehicle pass to a processing model;
acquiring a current candidate snapshot of a target vehicle;
determining a target snapshot of the target vehicle according to the sending period, the maximum number of sending times, and the current candidate snapshot; and
sending the target snapshot to the processing model for processing, so as to acquire vehicle information of the target vehicle.
2. The method of claim 1, further comprising: acquiring the time interval between the current moment and the last time a snapshot of the target vehicle was sent to the processing model, and the cumulative number of times snapshots of the target vehicle have been sent;
wherein the determining the target snapshot of the target vehicle according to the sending period, the maximum number of sending times, and the current candidate snapshot comprises:
determining the current candidate snapshot as the target snapshot when the time interval satisfies the sending period and the cumulative number of sent snapshots does not exceed the maximum number of sending times.
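The gating logic of claims 1 and 2 can be illustrated with a small Python sketch. The function and parameter names, the time unit, and the exact comparison conventions (inclusive vs. exclusive) are illustrative assumptions; the patent does not fix them.

```python
# Hypothetical sketch of the send-gating step: a candidate snapshot of a
# vehicle becomes the target snapshot (and is forwarded to the processing
# model) only if the configured sending period has elapsed since the last
# send AND the per-vehicle send count has not reached the maximum.
def should_send(time_since_last_send: float,
                sends_so_far: int,
                send_period: float,
                max_sends: int) -> bool:
    """Return True if the current candidate snapshot should be sent."""
    period_satisfied = time_since_last_send >= send_period
    quota_remaining = sends_so_far < max_sends  # assumed strict comparison
    return period_satisfied and quota_remaining
```

This keeps the processing model from being flooded with near-duplicate snapshots of the same vehicle while it stays in view.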
3. The method of claim 1, further comprising: acquiring a detection range threshold and a detection confidence threshold;
wherein the obtaining of the current candidate snapshot of the target vehicle comprises:
acquiring a frame of target video frame, and determining the target video frame as a current video frame;
inputting the current video frame into a preset initial detection model to obtain at least one initial detection result of the current video frame;
determining a current vehicle result set and a current license plate result set corresponding to the current video frame according to the initial detection result which meets the detection range threshold and the detection confidence threshold in the at least one initial detection result;
determining a vehicle corresponding to any element in the current vehicle result set as the target vehicle under the condition that the current vehicle result set is not empty;
acquiring a historical vehicle result set and a historical license plate result set corresponding to a historical video frame;
and determining a current candidate snapshot of the target vehicle according to the current vehicle result set, the current license plate result set, the historical vehicle result set and the historical license plate result set.
4. The method of claim 3, wherein said obtaining a frame of target video frames comprises:
reading a frame of video coding data;
sending the video coding data to a decoder for decoding to obtain a first format video frame;
performing color space transformation on the first format video frame to obtain a second format video frame;
determining the second format video frame as the target video frame.
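The "color space transformation" of claim 4 typically converts decoder output (commonly a YUV-family frame) into an RGB/BGR frame for the detection model. The patent names neither the formats nor the coefficients; the per-pixel sketch below assumes full-range BT.601 as a stand-in.

```python
# Hypothetical per-pixel YUV -> RGB conversion (full-range BT.601
# coefficients, an assumption). Real pipelines would apply this to whole
# frames, e.g. via a library routine, rather than pixel by pixel.
def yuv_to_rgb(y: int, u: int, v: int) -> tuple:
    """Convert one 8-bit YUV pixel to an 8-bit RGB triple."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))  # keep values in 0..255
    return clamp(r), clamp(g), clamp(b)
```

A neutral gray pixel (Y=128, U=V=128) maps to (128, 128, 128), which is a quick sanity check on the coefficients.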
5. The method according to claim 3, wherein the initial detection result comprises coordinates of a detection frame for framing a snapshot object on the target video frame, a type of the snapshot object, and a detection result confidence, wherein the type of the snapshot object is a vehicle or a license plate;
determining a current vehicle result set and a current license plate result set corresponding to the current video frame according to the initial detection result which satisfies the detection range threshold and the detection confidence threshold in the at least one initial detection result, including:
for each initial detection result, when the coordinates of the detection frame in the initial detection result are within the detection range defined by the detection range threshold and the detection result confidence in the initial detection result is greater than or equal to the detection confidence threshold, adding the initial detection result to the current vehicle result set when the type of the snapshot object in the initial detection result is a vehicle, or adding the initial detection result to the current license plate result set when the type of the snapshot object is a license plate.
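The filtering in claim 5 can be sketched as follows. The detection-record layout (`box`, `type`, `conf` fields), the corner-coordinate box convention, and the full-containment range test are illustrative assumptions, not the patent's data format.

```python
# Hypothetical partition of raw detections into the current vehicle result
# set and the current license plate result set, per claim 5: keep a
# detection only if its box lies inside the detection range and its
# confidence meets the threshold, then route it by snapshot-object type.
def split_detections(detections, roi, conf_threshold):
    """detections: list of {'box': (x1,y1,x2,y2), 'type': str, 'conf': float};
    roi: detection-range rectangle (x1, y1, x2, y2)."""
    def inside(box):
        return (box[0] >= roi[0] and box[1] >= roi[1]
                and box[2] <= roi[2] and box[3] <= roi[3])

    vehicles, plates = [], []
    for det in detections:
        if inside(det['box']) and det['conf'] >= conf_threshold:
            (vehicles if det['type'] == 'vehicle' else plates).append(det)
    return vehicles, plates
```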
6. The method of claim 3, further comprising:
binding, when the current vehicle result set and the current license plate result set are both non-empty, the vehicle snapshot and the license plate snapshot that belong to the same vehicle pass in the current video frame, according to the coordinates of the detection frames in the current vehicle result set and the coordinates of the detection frames in the current license plate result set;
and inputting the license plate snapshot picture which is bound with the corresponding vehicle snapshot picture into a preset license plate recognition model to acquire corresponding license plate information.
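A simple way to realize the coordinate-based binding in claim 6 is a containment test: a license plate box is paired with the vehicle box that encloses it. This is a sketch under that assumption; the patent only says the binding uses the detection-frame coordinates, so an IoU or center-point test would be equally plausible.

```python
# Hypothetical plate-to-vehicle binding: pair each license-plate box with
# the first vehicle box that fully contains it. Boxes are (x1, y1, x2, y2).
def bind_plates(vehicle_boxes, plate_boxes):
    """Return a list of (vehicle_box, plate_box) pairs."""
    def contains(outer, inner):
        return (outer[0] <= inner[0] and outer[1] <= inner[1]
                and outer[2] >= inner[2] and outer[3] >= inner[3])

    pairs = []
    for plate in plate_boxes:
        for vehicle in vehicle_boxes:
            if contains(vehicle, plate):
                pairs.append((vehicle, plate))
                break  # one vehicle per plate
    return pairs
```

Each bound plate crop would then be passed to the license plate recognition model, as the claim describes.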
7. The method of claim 6, wherein said determining a current candidate snapshot of the target vehicle from the current vehicle result set, the current license plate result set, the historical vehicle result set, and the historical license plate result set comprises:
marking the vehicle of each vehicle pass corresponding to the elements in the current vehicle result set and the historical vehicle result set, to obtain a vehicle identification (ID) of the vehicle of each pass;
determining the area of a corresponding vehicle snapshot according to the coordinates of each detection frame in the current vehicle result set and the historical vehicle result set;
acquiring license plate information corresponding to all vehicle IDs determined by the current license plate result set and the historical license plate result set;
obtaining a quality score of the vehicle snapshot image corresponding to each vehicle ID according to the area and the license plate information of the vehicle snapshot image corresponding to each vehicle ID, a preset vehicle size weight value and a preset license plate information weight value;
and determining the vehicle snapshot image with the highest quality score in the vehicle snapshot images corresponding to each vehicle ID as the current candidate snapshot image corresponding to each vehicle ID.
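The quality scoring of claim 7 combines snapshot area and license plate information under preset weights, then keeps the best snapshot per vehicle ID. The weighted-sum form, the normalization, and the weight values below are assumptions for illustration; the claim specifies only that the score depends on area, plate information, and the two weights.

```python
# Hypothetical per-vehicle snapshot selection: score each candidate as a
# weighted sum of its (normalized) area and whether a license plate was
# recognized, then keep the highest-scoring snapshot for each vehicle ID.
def best_snapshot(snapshots, size_weight=0.6, plate_weight=0.4):
    """snapshots: {vehicle_id: [{'area': float in 0..1, 'has_plate': bool}, ...]}
    Returns {vehicle_id: best candidate dict}."""
    best = {}
    for vid, candidates in snapshots.items():
        best[vid] = max(
            candidates,
            key=lambda s: size_weight * s['area']
                          + plate_weight * float(s['has_plate']),
        )
    return best
```

With these example weights, a slightly smaller snapshot that includes a readable plate can outrank a larger one without a plate, which matches the intent of weighting plate information into the score.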
8. A vehicle detection device comprising:
the first acquisition module is used for acquiring a sending period and a maximum number of sending times for sending snapshots of a vehicle of the same vehicle pass to a processing model;
the second acquisition module is used for acquiring a current candidate snapshot of the target vehicle;
the snapshot determining module is used for determining a target snapshot of the target vehicle according to the sending period, the maximum sending times and the current candidate snapshot;
and the information acquisition module is used for sending the target snapshot image to the processing model for processing so as to acquire the vehicle information of the target vehicle.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
11. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202110335726.0A 2021-03-29 2021-03-29 Vehicle detection method, device, equipment and storage medium Active CN113012439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110335726.0A CN113012439B (en) 2021-03-29 2021-03-29 Vehicle detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113012439A true CN113012439A (en) 2021-06-22
CN113012439B CN113012439B (en) 2022-06-21

Family

ID=76408941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110335726.0A Active CN113012439B (en) 2021-03-29 2021-03-29 Vehicle detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113012439B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040167861A1 (en) * 2003-02-21 2004-08-26 Hedley Jay E. Electronic toll management
CN102708685A (en) * 2012-04-27 2012-10-03 南京航空航天大学 Device and method for detecting and snapshotting violation vehicles
CN105632171A (en) * 2015-12-29 2016-06-01 安徽海兴泰瑞智能科技有限公司 Traffic road condition video monitoring method
CN106682644A (en) * 2017-01-11 2017-05-17 同观科技(深圳)有限公司 Double dynamic vehicle monitoring management system and method based on mobile vedio shooting device
CN107871011A (en) * 2017-11-21 2018-04-03 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN108550264A (en) * 2018-06-22 2018-09-18 泉州创先力智能科技有限公司 A kind of road monitoring method, device, equipment and storage medium
CN109147341A (en) * 2018-09-14 2019-01-04 杭州数梦工场科技有限公司 Violation vehicle detection method and device
CN109254833A (en) * 2017-07-12 2019-01-22 杭州海康威视数字技术股份有限公司 Picture analyzing method, apparatus and system, computer equipment
US10475338B1 (en) * 2018-09-27 2019-11-12 Melodie Noel Monitoring and reporting traffic information
CN110503042A (en) * 2019-08-23 2019-11-26 Oppo广东移动通信有限公司 Image processing method, device and electronic equipment
CN111191481A (en) * 2018-11-14 2020-05-22 杭州海康威视数字技术股份有限公司 Vehicle identification method and system
CN111476107A (en) * 2020-03-18 2020-07-31 平安国际智慧城市科技股份有限公司 Image processing method and device
US10748423B2 (en) * 2018-11-27 2020-08-18 Toyota Motor North America, Inc. Proximity-based vehicle tagging
CN111738098A (en) * 2020-05-29 2020-10-02 浪潮(北京)电子信息产业有限公司 Vehicle identification method, device, equipment and storage medium
CN111738033A (en) * 2019-03-24 2020-10-02 初速度(苏州)科技有限公司 Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
JP2020191509A (en) * 2019-05-20 2020-11-26 パナソニックi−PROセンシングソリューションズ株式会社 Vehicle monitoring system and vehicle monitoring method
CN112133100A (en) * 2020-09-16 2020-12-25 北京影谱科技股份有限公司 Vehicle detection method based on R-CNN
CN112270309A (en) * 2020-11-20 2021-01-26 罗普特科技集团股份有限公司 Vehicle access point equipment snapshot quality evaluation method and device and readable medium
CN112416570A (en) * 2020-10-15 2021-02-26 北京旷视科技有限公司 Picture stream access method and device, picture processing system and electronic equipment

Also Published As

Publication number Publication date
CN113012439B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN112633384B (en) Object recognition method and device based on image recognition model and electronic equipment
CN113642431A (en) Training method and device of target detection model, electronic equipment and storage medium
CN114494784A (en) Deep learning model training method, image processing method and object recognition method
CN112560862A (en) Text recognition method and device and electronic equipment
CN113392253B (en) Visual question-answering model training and visual question-answering method, device, equipment and medium
CN114863437B (en) Text recognition method and device, electronic equipment and storage medium
CN113177968A (en) Target tracking method and device, electronic equipment and storage medium
CN113326773A (en) Recognition model training method, recognition method, device, equipment and storage medium
CN114022865A (en) Image processing method, apparatus, device and medium based on lane line recognition model
CN113963197A (en) Image recognition method and device, electronic equipment and readable storage medium
CN116309963B (en) Batch labeling method and device for images, electronic equipment and storage medium
CN116246287B (en) Target object recognition method, training device and storage medium
CN115457329B (en) Training method of image classification model, image classification method and device
CN113012439B (en) Vehicle detection method, device, equipment and storage medium
CN115761698A (en) Target detection method, device, equipment and storage medium
CN115909357A (en) Target identification method based on artificial intelligence, model training method and device
CN114998387A (en) Object distance monitoring method and device, electronic equipment and storage medium
CN114842541A (en) Model training and face recognition method, device, equipment and storage medium
CN114612971A (en) Face detection method, model training method, electronic device, and program product
CN113936158A (en) Label matching method and device
CN113706705A (en) Image processing method, device and equipment for high-precision map and storage medium
CN113033431A (en) Optical character recognition model training and recognition method, device, equipment and medium
CN114173158A (en) Face recognition method, cloud device, client device, electronic device and medium
CN113420681A (en) Behavior recognition and model training method, apparatus, storage medium, and program product
CN112560848A (en) Training method and device of POI (Point of interest) pre-training model and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant