CN112818847A - Vehicle detection method, device, computer equipment and storage medium
- Publication number: CN112818847A (application CN202110129953.8A)
- Authority: CN (China)
- Prior art keywords: vehicle, image, vehicles, distance, detected
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/54 — Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- H04L67/025 — Protocols based on web technology, e.g. hypertext transfer protocol [HTTP], for remote control or remote monitoring of applications
- H04L67/12 — Protocols specially adapted for proprietary or special-purpose networking environments, e.g. networks in vehicles
- H04L67/52 — Network services specially adapted for the location of the user terminal
- G06V20/625 — License plates
- G06V2201/08 — Detecting or categorising vehicles
Abstract
The application relates to a vehicle detection method, a vehicle detection device, a computer device and a storage medium. The method comprises the following steps: acquiring an image to be detected; detecting the image to be detected, and if it is determined that a plurality of vehicles and an associated object exist in the image, locating, from the plurality of vehicles and according to the associated object, the two vehicles associated with that object; obtaining a first distance between the two vehicles according to their position information; and if the first distance is smaller than a first threshold, generating a detection result indicating a vehicle violation. Because the method automatically reviews captured vehicle-violation images on the basis of deep learning, it can improve the efficiency of reviewing such images and save labor cost; and because the two vehicles associated with the associated object are determined first and the violation judgment is then based on the distance between those two vehicles, the accuracy of the review can also be improved.
Description
Technical Field
The present application relates to the field of vehicle detection technologies, and in particular, to a vehicle detection method, an apparatus, a computer device, and a storage medium.
Background
As vehicle ownership increases, so does the probability of traffic accidents on the road. Under these circumstances, vehicles on the road need to be required to keep a certain safe distance. For example, two vehicles travelling one behind the other in the same lane of an expressway need to keep a certain safety distance; and when a broken-down vehicle has to be moved to a safe zone by a towing vehicle, the towing vehicle and the towed vehicle should keep a certain safe distance during the move.
With the application of surveillance technology in intelligent transportation, when an intelligent transportation system detects that the distance between vehicles is less than the safe distance, the behavior can be judged to be a violation, and a camera is triggered to capture a vehicle-violation image. The captured vehicle-violation image then requires a secondary review. In the related art, this secondary review is usually carried out by manual screening, which is inefficient.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a vehicle detection method, an apparatus, a computer device, and a storage medium that can improve the efficiency of the secondary review of vehicle-violation images.
In a first aspect, an embodiment of the present application provides a vehicle detection method, where the method includes:
acquiring an image to be detected;
detecting the image to be detected, and if it is determined that a plurality of vehicles and an associated object exist in the image to be detected, locating, from the plurality of vehicles and according to the associated object, two vehicles associated with the associated object;
acquiring a first distance between the two vehicles according to the position information of the two vehicles;
and if the first distance is smaller than a first threshold, generating a detection result indicating a vehicle violation.
In one embodiment, the detecting the image to be detected includes:
carrying out target detection on the image to be detected, and if a plurality of vehicles exist in the image to be detected, acquiring a vehicle area image corresponding to each vehicle;
positioning a license plate area in each vehicle area image to obtain a plurality of license plate area images;
performing text recognition on each license plate region image in the plurality of license plate region images to obtain a plurality of license plate information;
and if the license plate information identical to the standard license plate information exists in the plurality of pieces of license plate information, performing target detection on the image to be detected, and determining that the associated object exists in the image to be detected.
In one embodiment, the associated object is a traction device; locating, from the plurality of vehicles, two vehicles associated with the associated object according to the associated object, including:
acquiring position information of a target vehicle corresponding to the standard license plate information and position information of a traction device;
obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device;
if the second distance is smaller than a second threshold value, the target vehicle is used as a traction vehicle;
obtaining a third distance between the other vehicle and the traction device according to the position information of the other vehicle except the target vehicle in the plurality of vehicles and the position information of the traction device;
and acquiring a third distance smaller than a third threshold value, and taking the vehicle corresponding to the third distance smaller than the third threshold value as a towed vehicle.
In one embodiment, the traction device is a traction rope; the position information of the traction device comprises position information of two end points of the traction rope; obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device, comprising:
obtaining a second distance between the target vehicle and two end points of the traction rope according to the position information of the target vehicle and the position information of the two end points of the traction rope;
if the second distance is less than the second threshold, the target vehicle is taken as a towing vehicle, including:
and if the second distance between the target vehicle and one end point of the traction rope is smaller than a second threshold value, the target vehicle is taken as the traction vehicle.
In one embodiment, obtaining the third distance between the other vehicle and the traction device according to the position information of the other vehicle except the target vehicle in the plurality of vehicles and the position information of the traction device includes:
and obtaining a third distance between the other vehicle and the other end point of the traction rope according to the position information of the other vehicle and the position information of the other end point of the traction rope.
In one embodiment, acquiring an image to be detected includes:
acquiring an original image, wherein the original image is a composite image obtained by splicing a plurality of illegal images, and the illegal images are obtained by shooting the same illegal event;
identifying a boundary region in the original image;
and shearing the original image according to the boundary area to obtain a plurality of images to be detected.
In one embodiment, the method further comprises:
generating a detection result that the vehicle is legitimate when any one of the following occurs:
the first distance is greater than or equal to a first threshold;
at most one vehicle exists in the image to be detected;
no associated object exists in the image to be detected;
and determining that a plurality of vehicles and associated objects exist in the image to be detected, but at most one vehicle exists in the plurality of vehicles and is associated with the associated object.
In a second aspect, an embodiment of the present application provides a vehicle detection apparatus, including:
the acquisition module is used for acquiring an image to be detected;
the detection module is used for detecting the image to be detected and determining whether a plurality of vehicles and associated objects exist in the image to be detected;
the associated vehicle positioning module is used for positioning two vehicles associated with the associated object from the plurality of vehicles according to the associated object when the plurality of vehicles and the associated object are determined to exist in the image to be detected;
the first distance generating module is used for acquiring a first distance between the two vehicles according to the position information of the two vehicles;
and the result generation module is used for generating a detection result of the vehicle violation when the first distance is smaller than the first threshold value.
In a third aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the vehicle detection method according to any one of the embodiments of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the vehicle detection method described in any one of the embodiments of the first aspect.
According to the vehicle detection method, the vehicle detection device, the computer device and the storage medium, the acquired image to be detected is detected, and if a plurality of vehicles and an associated object exist in the image, the two vehicles associated with the associated object are located from the plurality of vehicles according to the associated object; a first distance between the two vehicles is obtained according to their position information; and if the first distance is smaller than a first threshold, a detection result indicating a vehicle violation is generated. Because the captured vehicle-violation image is reviewed automatically on the basis of deep learning, the efficiency of reviewing such images can be improved and labor cost saved; and because the two vehicles associated with the associated object are determined first and the violation judgment is then based on the distance between them, the accuracy of the review can also be improved.
Drawings
FIG. 1 is a diagram of an exemplary vehicle detection system;
FIG. 2 is a diagram of an exemplary embodiment of a vehicle detection method;
FIG. 3 is a schematic flow chart diagram of a vehicle detection method in one embodiment;
FIG. 4 is a schematic flowchart illustrating the detecting step performed on the image to be detected in one embodiment;
FIG. 5 is a schematic flow chart illustrating steps for determining two vehicles associated with a traction device in one embodiment;
FIG. 6 is a schematic flow chart diagram of a vehicle detection method in one embodiment;
FIG. 7 is a block diagram showing the construction of a vehicle detecting apparatus according to an embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The vehicle detection method provided by the application can be applied to the application environment shown in fig. 1. The terminal 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. One or more pre-trained deep learning models for detecting images to be detected and judgment logic for judging whether the vehicle is illegal or not according to output results of the deep learning models are deployed in the terminal 110. Specifically, the terminal 110 acquires a detection request for detecting an image to be detected. The detection request may be triggered by a user, may be triggered by the terminal 110 itself when detecting that a preset condition is met, or may be sent through other electronic devices. The terminal 110 detects an image to be detected through a pre-trained deep learning model, and determines whether a plurality of vehicles and associated objects exist in the image to be detected. And if the plurality of vehicles and the associated objects exist in the image to be detected, positioning two vehicles associated with the associated objects from the plurality of vehicles according to the associated objects. The terminal 110 acquires a first distance of the two vehicles obtained from the position information of the two vehicles and compares the first distance with a first threshold. And if the first distance is smaller than the first threshold value, generating a detection result of the vehicle violation.
In another embodiment, the vehicle detection method provided by the present application can be applied to the application environment shown in fig. 2, in which the terminal 210 communicates with the server 220 through a network. The terminal 210 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 220 may be implemented by a separate server or a server cluster composed of a plurality of servers. One or more pre-trained deep learning models for detecting images to be detected, together with the judgment logic for deciding from their outputs whether a vehicle violation exists, are deployed in the server 220. The server 220 obtains a detection request for detecting an image to be detected. The detection request may be triggered by the server 220 itself when it detects that a preset condition is satisfied, or may be sent by the terminal 210. The server 220 detects the image to be detected through the pre-trained deep learning model and determines whether a plurality of vehicles and an associated object exist in the image to be detected. If a plurality of vehicles and an associated object exist in the image to be detected, two vehicles associated with the associated object are located from the plurality of vehicles according to the associated object. The server 220 obtains a first distance between the two vehicles from their position information and compares the first distance with a first threshold. If the first distance is smaller than the first threshold, a detection result indicating a vehicle violation is generated. The server 220 may transmit the detection result to the terminal 210 so that the terminal 210 can show on its screen whether the vehicle is in violation.
In one embodiment, as shown in fig. 3, a vehicle detection method is provided, which is described by taking the application of the method to the terminal in fig. 1 as an example, and includes the following steps:
step S310, an image to be detected is obtained.
The image to be detected is a vehicle-violation image that is to undergo a secondary review. It can be acquired by an image acquisition device, such as a camera or video camera in an intelligent transportation system. The vehicle is a motor vehicle, such as an automobile. Specifically, when the intelligent transportation system preliminarily judges from the acquired images that the distance between two vehicles does not meet the requirement, it triggers the image acquisition device to capture and store the current vehicle-violation image. The captured image is then used as the image to be detected for the secondary review.
Step S320, detecting the image to be detected, and determining whether there are a plurality of vehicles and associated objects in the image to be detected.
The associated object is a reference object used to determine whether the distance between vehicles meets the requirement. For example, if it is currently necessary to determine whether the distance between a towing vehicle and a towed vehicle meets the requirement, the associated object may be the traction device between the two vehicles, such as a traction rope; if it is currently necessary to determine whether the distance between two vehicles on an expressway meets the requirement, the associated object may be a road marking line.
Specifically, after the terminal acquires the image to be detected, the image to be detected can be detected by adopting one or more pre-trained deep learning models. For example, a pre-trained target detection model may be used to detect the image to be detected. And the terminal acquires an output result of the deep learning model. If it is determined that a plurality of vehicles and associated objects exist in the image to be detected according to the output result of the deep learning model, the step S330 is continuously performed.
Step S330, if the plurality of vehicles and the associated objects are determined to exist in the image to be detected, two vehicles associated with the associated objects are positioned from the plurality of vehicles according to the associated objects.
Wherein the two vehicles associated with the associated object may be determined according to an illegal scenario of the vehicle. For example, if the illegal scenario of the vehicle is a towing vehicle driving scenario, the two vehicles associated with the associated object may refer to a towing vehicle and a towed vehicle located at both ends of the towing device. If the illegal scene of the vehicle is a normal driving scene of the vehicle on the expressway, the two vehicles associated with the associated object may be vehicles located on the same side of the road marking.
Specifically, the detection result output by the deep learning model includes position information of the associated object and position information of each vehicle. And the terminal searches two vehicles related to the related object from the plurality of vehicles according to the position information of the related object and the position information of each vehicle. If there are two vehicles associated with the associated object, step S340 and step S350 are sequentially executed.
In step S340, a first distance between the two vehicles is obtained according to the position information of the two vehicles.
In step S350, if the first distance is smaller than the first threshold, a detection result of the vehicle violation is generated.
Specifically, the terminal acquires the position information of the two vehicles associated with the associated object, calculates the first distance between them from that position information, and compares the resulting first distance with a pre-configured first threshold. If the first distance is smaller than the first threshold, it is judged that the distance between the two associated vehicles does not meet the requirement, and a detection result indicating a vehicle violation is generated.
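As an illustration of steps S340 and S350, the following is a minimal sketch that measures the first distance as the distance between the center points of the two vehicles' bounding boxes and compares it with the first threshold. The box format, the center-point choice and the threshold value are assumptions made for illustration; the application does not prescribe them.

```python
from math import hypot

def box_center(box):
    """Center (x, y) of a bounding box given as (x1, y1, x2, y2) in pixels."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def check_first_distance(box_a, box_b, first_threshold=150.0):
    """Return a detection result string for the two associated vehicles.

    first_threshold is a hypothetical pixel value; the application only states
    that a violation result is generated when the first distance is below it.
    """
    (xa, ya), (xb, yb) = box_center(box_a), box_center(box_b)
    first_distance = hypot(xa - xb, ya - yb)
    return "violation" if first_distance < first_threshold else "legal"
```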
According to the vehicle detection method, the acquired image to be detected is detected, and if a plurality of vehicles and an associated object exist in the image, the two vehicles associated with the associated object are located from the plurality of vehicles according to the associated object; a first distance between the two vehicles is obtained according to their position information; and if the first distance is smaller than the first threshold, a detection result indicating a vehicle violation is generated. Because the captured vehicle-violation image is reviewed automatically on the basis of deep learning, the review efficiency can be improved and labor cost saved; and because the two vehicles associated with the associated object are determined first and the violation judgment is then based on the distance between them, the accuracy of the review can also be improved.
In one embodiment, if any one of the following occurs while the terminal executes step S310 to step S350, a detection result indicating that the vehicle is legal is generated:
(1) At most one vehicle exists in the image to be detected. The terminal needs to check whether the distance between two vehicles meets the requirement, so if the output of the deep learning model shows at most one vehicle in the image to be detected, a detection result indicating that the vehicle is legal can be generated.
(2) No associated object exists in the image to be detected. In this case the actual scene of the vehicle differs from the violation scene to be detected, and a detection result indicating that the vehicle is legal is generated.
(3) A plurality of vehicles and an associated object are determined to exist in the image to be detected, but at most one of the vehicles is associated with the associated object. For example, if the associated object is a traction device and the terminal finds that only one vehicle is associated with it, the actual scene is not a towing scene, and a detection result indicating that the vehicle is legal is generated.
(4) The first distance is equal to or greater than the first threshold. If two of the vehicles are associated with the associated object but the first distance between them is greater than or equal to the first threshold, the distance between the two vehicles meets the requirement, and a detection result indicating that the vehicle is legal is generated.
In this embodiment, whether a violation exists is judged from several factors, such as the number of vehicles in the image to be detected, the association between the vehicles and the associated object, and the distance between the vehicles. This lets the terminal, while automatically performing the secondary review in line with actual traffic rules, remain robust to scenes it was not designed for, which improves the usability of the vehicle detection method.
In one embodiment, as shown in fig. 4, step S320 of detecting an image to be detected and determining whether a plurality of vehicles and associated objects exist in the image to be detected may be implemented by:
step S321, performing target detection on the image to be detected, and if a plurality of vehicles exist in the image to be detected, acquiring a vehicle area image corresponding to each vehicle.
Specifically, the target detection of the image to be detected may employ a pre-trained first target detection model. The first target detection model may be any deep learning model that can be used for target detection, such as RefineDet (a single-stage detector), Faster R-CNN (a target detection network), SSD (Single Shot MultiBox Detector), or YOLO (You Only Look Once); or a model improved on the basis of an existing model; or a self-designed model. After the image to be detected is obtained, it is input into the first target detection model. If it is determined from the output of the first target detection model that a plurality of vehicle regions exist in the image to be detected, each vehicle region can be extracted, for example by cropping, and stored to obtain a corresponding vehicle region image.
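A minimal sketch of step S321 is shown below, assuming a generic `detector` object whose `detect` method returns labelled boxes; this interface and the label name are hypothetical and are not specified by the application.

```python
def crop_vehicle_regions(image, detector, vehicle_label="vehicle"):
    """Run the first target detection model on a BGR image array and crop one
    sub-image per detected vehicle.

    detector.detect is a hypothetical interface returning a list of
    (label, (x1, y1, x2, y2), score) tuples.
    """
    vehicle_images, vehicle_boxes = [], []
    for label, (x1, y1, x2, y2), score in detector.detect(image):
        if label == vehicle_label:
            vehicle_images.append(image[y1:y2, x1:x2].copy())
            vehicle_boxes.append((x1, y1, x2, y2))
    return vehicle_images, vehicle_boxes
```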
Step S322, positioning the license plate area in each vehicle area image to obtain a plurality of license plate area images.
Specifically, the terminal inputs each of the obtained vehicle region images to the second object detection model. The second target detection model may be any deep learning model that can be used for target detection, such as RefineDet, Faster R-CNN, SSD, YOLO, and the like; or a model improved based on an existing model; or a self-designed model. The second target detection model detects whether a license plate region exists in each vehicle region image. If the license plate region exists in the vehicle region image, the license plate region in the vehicle region image can be extracted and stored in a cutting mode and the like, and a corresponding license plate region image is obtained.
Step S323, text recognition is carried out on each license plate region image in the plurality of license plate region images to obtain a plurality of license plate information.
Specifically, performing text recognition on each license plate region image may be based on a pre-trained text recognition model. The text recognition model may adopt any deep learning model that can be used for text recognition, such as CNN (Convolutional Neural Networks), LSTM (Long Short-Term Memory), and the like; or a model improved based on an existing model; or a self-designed model. And the terminal inputs the obtained images of each license plate area into the text recognition model. And identifying each license plate region image through a text identification model to obtain license plate information corresponding to each license plate region image.
Step S324, if the license plate information identical to the standard license plate information exists in the plurality of license plate information, performing target detection on the image to be detected, and determining that the associated object exists in the image to be detected.
The standard license plate information may refer to the license plate information of the vehicle that was identified as in violation in the first check. When the intelligent transportation system judges that the distance between two running vehicles does not meet the requirement, it acquires the vehicle-violation image, recognizes it to obtain the standard license plate information of the offending vehicle, and establishes a correspondence between the vehicle-violation image and the standard license plate information. The corresponding standard license plate information is then retrieved when the vehicle-violation image undergoes the secondary review.
Specifically, the terminal compares each piece of license plate information output by the text recognition model with the standard license plate information. If a piece of license plate information is the same as the standard license plate information, the target vehicle that needs to be re-checked for a violation exists in the image to be detected; the image to be detected is then further detected to judge whether the associated object exists in it. Conversely, if the terminal judges that none of the license plate information output by the text recognition model matches the standard license plate information, no target vehicle requiring secondary review exists in the image to be detected, and a detection result indicating that the vehicle is legal is generated.
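The following sketch strings steps S322 to S324 together and returns the box of the vehicle whose recognized plate matches the standard license plate information; the `plate_detector` and `ocr_model` interfaces are assumptions, not part of the application.

```python
def find_target_vehicle(vehicle_images, vehicle_boxes, plate_detector, ocr_model,
                        standard_plate):
    """Return the bounding box of the vehicle whose plate matches standard_plate, or None."""
    for crop, box in zip(vehicle_images, vehicle_boxes):
        plate_box = plate_detector.detect(crop)           # hypothetical single-plate interface
        if plate_box is None:
            continue
        px1, py1, px2, py2 = plate_box
        plate_text = ocr_model.recognize(crop[py1:py2, px1:px2])  # hypothetical OCR call
        if plate_text == standard_plate:
            return box                                    # target vehicle for secondary review
    return None
```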
In this embodiment, whether the target vehicle exists in the image to be detected is judged by comparing each piece of license plate information with the standard license plate information. When the target vehicle does not exist, a detection result indicating that the vehicle is legal is generated and the secondary review ends immediately, which improves the efficiency of the secondary review.
In one embodiment, the associated object is a traction device; as shown in fig. 5, in step S330, if it is determined that a plurality of vehicles and associated objects exist in the image to be detected, two vehicles associated with the associated objects are located from the plurality of vehicles according to the associated objects, and the following steps are performed:
step S331, position information of the target vehicle corresponding to the standard license plate information and position information of the traction device are obtained.
Step S332, obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device.
In step S333, if the second distance is smaller than the second threshold, the target vehicle is regarded as the towing vehicle.
The target vehicle is the vehicle whose violation is to be re-checked in the secondary review. Specifically, the terminal acquires the position information of the target vehicle and the position information of the traction device output by the deep learning model. The position information of the target vehicle may be the position information of the rectangular region in which the target vehicle is located, and the position information of the traction device may be the position information of the rectangular region in which the traction device is located. The terminal takes a preset point from each piece of position information, for example a preset end point or the center point of the rectangular region, and calculates the second distance between the target vehicle and the traction device from the preset point of the target vehicle and the preset point of the traction device. The second distance is then compared with a second threshold. If the second distance is smaller than the second threshold, the target vehicle is a vehicle associated with the traction device and can be regarded as the towing vehicle.
In step S334, a third distance between the other vehicle and the traction device is obtained based on the position information of the other vehicle except the target vehicle among the plurality of vehicles and the position information of the traction device.
Step S335 acquires a third distance smaller than a third threshold, and takes the vehicle corresponding to the third distance smaller than the third threshold as a towed vehicle.
Specifically, because a plurality of vehicles exist in the image to be detected, after determining that the target vehicle is associated with the traction device the terminal continues to calculate the third distance between each vehicle other than the target vehicle and the traction device. The third distance is calculated in the same way as the second distance, so the calculation is not repeated here. The terminal compares each third distance with a third threshold. If a third distance is smaller than the third threshold, the vehicle corresponding to that distance is another vehicle associated with the traction device and can be regarded as the towed vehicle. The second threshold and the third threshold may take the same value.
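Steps S331 to S335 can be sketched as below, again using bounding-box center points; the helper name and the threshold values are illustrative assumptions rather than the claimed implementation.

```python
from math import hypot

def locate_towing_pair(target_box, other_boxes, device_box,
                       second_threshold=80.0, third_threshold=80.0):
    """Return (towing_box, towed_box) if both associations hold, otherwise None.

    Boxes are (x1, y1, x2, y2) pixel rectangles; the thresholds are hypothetical
    values, since the application only requires the second and third distances
    to fall below their respective thresholds.
    """
    center = lambda b: ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)
    tx, ty = center(target_box)
    dx, dy = center(device_box)
    if hypot(tx - dx, ty - dy) >= second_threshold:
        return None                          # second distance too large: target is not towing
    for box in other_boxes:
        ox, oy = center(box)
        if hypot(ox - dx, oy - dy) < third_threshold:
            return target_box, box           # third distance small enough: towed vehicle found
    return None
```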
In this embodiment, whether a vehicle is associated with the traction device is determined from its distance to the traction device, so the towing vehicle and the towed vehicle can be located accurately, which improves the accuracy of vehicle detection.
In one embodiment, the traction device is a traction rope; the position information of the traction device comprises position information of two end points of the traction rope; step S332, obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device, includes: and obtaining a second distance between the target vehicle and the two end points of the traction rope according to the position information of the target vehicle and the position information of the two end points of the traction rope. Step S333, if the second distance is smaller than the second threshold, regarding the target vehicle as a towing vehicle, including: and if the second distance between the target vehicle and one end point of the traction rope is smaller than a second threshold value, the target vehicle is taken as the traction vehicle.
Specifically, when the traction device is a traction rope, the output result of the deep learning model includes position information of two end points of the traction rope. And the terminal respectively calculates to obtain second distances between the target vehicle and the two end points according to the position information of the two end points of the traction rope and the position information of the target vehicle. Each second distance is then compared to a second threshold. And if the second distance between the target vehicle and one of the end points is smaller than a second threshold value, the target vehicle is taken as a traction vehicle. Further, if the second distances between the target vehicle and the two end points are both smaller than the second threshold value, or the second distances between the target vehicle and the two end points are both greater than or equal to the second threshold value, it indicates that the target vehicle does not pull another vehicle, and a detection result that the vehicle is legal may be generated.
In this embodiment, when the traction device is a traction rope, the distance between the two end points of the traction rope and the target vehicle is calculated, so that whether the target vehicle is a traction vehicle or not can be accurately judged, and the accuracy of vehicle detection can be improved.
In one embodiment, the step S334 of obtaining a third distance between the other vehicle and the traction device according to the position information of the other vehicle except the target vehicle and the position information of the traction device in the plurality of vehicles includes: and obtaining a third distance between the other vehicle and the other end point of the traction device according to the position information of the other vehicle and the position information of the other end point of the traction device.
Specifically, when the traction device is a traction rope and the second distance between the target vehicle and one end point of the traction rope is smaller than the second threshold value, the terminal acquires the position information of the other end point. The other end point refers to an end point whose distance from the target vehicle is greater than or equal to a second threshold value. A third distance is calculated for the other end point and each of the other vehicles, respectively. The resulting third distance is then compared to a third threshold. If the third distance is smaller than the third threshold value, the vehicle corresponding to the third distance is taken as a towed vehicle; if the third distance is not smaller than the third threshold, it indicates that the vehicle towed by the target vehicle does not exist in the other vehicles, and a detection result that the vehicle is legal may be generated.
In this embodiment, when the traction device is a traction rope, the distance between the end point of the traction rope and each other vehicle is calculated, so that the towed vehicle can be accurately screened from the other vehicles, and the accuracy of vehicle detection can be improved.
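For the traction-rope case, the association can be made per rope endpoint, as in the sketch below; the endpoint coordinates are assumed to come from the deep learning model, and the threshold values are placeholders.

```python
from math import hypot

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def locate_by_rope(target_box, other_boxes, rope_endpoints,
                   second_threshold=60.0, third_threshold=60.0):
    """Endpoint-wise variant: the target is the towing vehicle only if it is close
    to exactly one rope endpoint; the towed vehicle is searched near the other
    endpoint. Thresholds are hypothetical values."""
    tx, ty = center(target_box)
    dists = [hypot(tx - ex, ty - ey) for ex, ey in rope_endpoints]
    near = [d < second_threshold for d in dists]
    if near.count(True) != 1:
        return None                                # neither or both endpoints near: legal result
    other_end = rope_endpoints[near.index(False)]  # endpoint away from the target vehicle
    ex, ey = other_end
    for box in other_boxes:
        ox, oy = center(box)
        if hypot(ox - ex, oy - ey) < third_threshold:
            return target_box, box
    return None
```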
In one embodiment, step S310, acquiring an image to be detected includes: acquiring an original image, wherein the original image is a composite image obtained by splicing a plurality of illegal images, and the illegal images are obtained by shooting the same illegal event; identifying a boundary region in the original image; and shearing the original image according to the boundary area to obtain a plurality of images to be detected.
Specifically, the intelligent transportation system usually collects multiple vehicle-violation images for the same violation event. If the target vehicle is judged to be in violation in the first check on the basis of those images, the intelligent transportation system may splice the captured images into a composite image. During the secondary review, if the terminal detects that the acquired image is a composite image, it can slide a rectangular frame over the composite image and take each position as an ROI (region of interest) image. Each ROI image is input into a pre-trained classification model, which predicts whether the ROI belongs to a boundary region; the classification model may be a binary classification model. After the boundary regions of the composite image are obtained, the terminal cuts the composite image according to the boundary regions to obtain a plurality of images to be detected, and each image to be detected is then processed with the vehicle detection method of any of the above embodiments. If the detection result of any image to be detected indicates a vehicle violation, a detection result indicating a vehicle violation is generated.
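A sketch of the sliding-window boundary search and cutting described above, assuming the violation images are stacked vertically in the composite image and that `boundary_classifier.predict` is the binary classification model; the window size and stride are arbitrary illustrative values.

```python
import cv2  # OpenCV, used only for image loading

def split_composite(image_path, boundary_classifier, win=64, stride=32):
    """Slide a window over the composite image, classify each ROI as boundary or
    not boundary, and cut the image at the detected horizontal boundary rows."""
    image = cv2.imread(image_path)
    height, width = image.shape[:2]
    boundary_rows = []
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            roi = image[y:y + win, x:x + win]
            if boundary_classifier.predict(roi):   # hypothetical binary classifier call
                boundary_rows.append(y + win // 2)
                break                              # one hit per row of windows is enough
    cuts = [0] + sorted(set(boundary_rows)) + [height]
    return [image[a:b] for a, b in zip(cuts, cuts[1:]) if b - a > win]
```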
In this embodiment, the boundary region of the composite image is identified with a deep learning model and a plurality of images to be detected are obtained by cutting along that boundary region, which increases the degree of automation of vehicle detection and therefore its efficiency.
In one embodiment, as shown in fig. 6, a vehicle detection method is provided, and in this embodiment, the associated object is a traction rope. Taking the application of the method to the terminal as an example for explanation, the method comprises the following steps:
step S601, obtaining an original image, wherein the original image is a composite image obtained by splicing a plurality of vehicle illegal images, and the plurality of vehicle illegal images are obtained by shooting the same illegal event.
Step S602, a boundary region in the original image is identified using a pre-trained classification model.
Specifically, a rectangular frame is adopted to slide and traverse the original image to acquire an ROI image. And inputting the ROI image into a pre-trained classification model, and predicting whether the ROI image is a boundary region or not through the classification model. The classification model may be a binary classification model.
And step S603, shearing the original image according to the boundary area to obtain a plurality of images to be detected.
Step S604, for each image to be detected, the image is detected with the first target detection model to judge whether a plurality of vehicles exist in it. If a plurality of vehicles exist, a vehicle region image is obtained for each vehicle and step S605 is executed; if at most one vehicle exists, step S616 is executed to generate a detection result indicating that the vehicle is legal. The first target detection model may adopt YOLO v4 (YOLO version 4) with GhostNet (a lightweight neural network) as its backbone, which increases the detection speed of the first target detection model.
And step S605, positioning the license plate area in each vehicle area image through the second target detection model.
The second target detection model may adopt an SSD whose backbone network is ShuffleNet V2 (a lightweight neural network), which increases the detection speed of the second target detection model.
Step S606, performing text recognition on each license plate region image in the plurality of license plate region images through the text recognition model to obtain a plurality of license plate information. Wherein, the text recognition model can adopt an LSTM model.
Step S607, if the license plate information identical to the standard license plate information exists in the plurality of license plate information, taking the vehicle corresponding to the standard license plate information as a target vehicle, and continuing to execute step S608; otherwise, step S616 is executed to generate a detection result that the vehicle is legal.
And step S608, carrying out target detection on the image to be detected, and determining whether a traction rope exists in the image to be detected. If yes, continuing to execute step S609; otherwise, step S616 is executed to generate a detection result that the vehicle is legal. In step S608, the target detection on the image to be detected may adopt a first target detection model or a second target detection model, or may adopt a separately trained target detection model.
And step S609, obtaining a second distance between the target vehicle and two end points of the traction rope according to the position information of the target vehicle and the position information of the two end points of the traction rope.
Step S610, if the second distance between the target vehicle and one end point of the traction rope is smaller than a second threshold value, the target vehicle is taken as the traction vehicle, and the step S611 is continuously executed; otherwise, step S616 is executed to generate a detection result that the vehicle is legal.
In step S611, a third distance between the other vehicle and the other end point of the pulling rope is obtained according to the position information of the other vehicle and the position information of the other end point of the pulling rope.
Step S612, if the third distance is smaller than the third threshold, taking the vehicle corresponding to the third distance smaller than the third threshold as a towed vehicle, and continuing to execute step S613; otherwise, step S616 is executed to generate a detection result that the vehicle is legal.
In step S613, a first distance between the towing vehicle and the towed vehicle is obtained according to their position information.
In step S614, the first distance is compared with a first threshold. If the first distance is smaller than the first threshold, continuing to execute step S615 to generate a detection result of the vehicle violation; otherwise, step S616 is executed to generate a detection result that the vehicle is legal.
Step S615, a detection result of the vehicle violation is generated.
In step S616, a detection result that the vehicle is legitimate is generated.
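Tying the steps together, the flow of steps S601 to S616 can be organized as in the sketch below, which reuses the helper sketches given earlier in this description; `models` and its attributes are hypothetical placeholders, and each early `continue` corresponds to a jump to step S616.

```python
def detect_violation(original_path, models, standard_plate):
    """Skeleton of the towing-scenario pipeline of steps S601 to S616.

    models is a hypothetical bundle holding the boundary classifier, the two
    detectors, the OCR model and a rope detector.
    """
    for image in split_composite(original_path, models.boundary_classifier):
        crops, boxes = crop_vehicle_regions(image, models.vehicle_detector)
        if len(boxes) < 2:
            continue                                         # at most one vehicle: legal (S616)
        target_box = find_target_vehicle(crops, boxes, models.plate_detector,
                                         models.ocr_model, standard_plate)
        if target_box is None:
            continue                                         # no target vehicle: legal (S616)
        rope_endpoints = models.rope_detector.detect(image)  # hypothetical: two (x, y) points or None
        if rope_endpoints is None:
            continue                                         # no traction rope: legal (S616)
        others = [b for b in boxes if b != target_box]
        pair = locate_by_rope(target_box, others, rope_endpoints)
        if pair is None:
            continue                                         # association failed: legal (S616)
        if check_first_distance(*pair) == "violation":
            return "violation"                               # S615
    return "legal"                                           # S616
```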
It should be understood that, although the steps in the above flowcharts are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not restricted to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise several sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are not necessarily executed in sequence and may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a vehicle detection apparatus 700 including: an acquisition module 701, a detection module 702, an associated vehicle positioning module 703, a distance generation module 704, and a result generation module 705, wherein:
an obtaining module 701, configured to obtain an image to be detected; a detection module 702, configured to detect an image to be detected, and determine whether a plurality of vehicles and associated objects exist in the image to be detected; the associated vehicle positioning module 703 is configured to, when it is determined that a plurality of vehicles and associated objects exist in the image to be detected, position two vehicles associated with the associated objects from the plurality of vehicles according to the associated objects; a distance generating module 704, configured to obtain a first distance between two vehicles according to the position information of the two vehicles; the result generation module 705 is configured to generate a detection result of the vehicle violation when the first distance is smaller than the first threshold.
In one embodiment, the detection module 702 includes: the first target detection unit is used for carrying out target detection on the image to be detected, and if a plurality of vehicles exist in the image to be detected, a vehicle area image corresponding to each vehicle is obtained; the license plate region detection unit is used for positioning a license plate region in each vehicle region image to obtain a plurality of license plate region images; the text recognition unit is used for performing text recognition on each license plate region image in the plurality of license plate region images to obtain a plurality of license plate information; and the second target detection unit is used for carrying out target detection on the image to be detected when the license plate information identical to the standard license plate information exists in the plurality of license plate information, and determining that the associated object exists in the image to be detected.
In one embodiment, the associated object is a traction device; an associated vehicle localization module 703 comprising: the first acquisition unit is used for acquiring the position information of a target vehicle corresponding to the standard license plate information and the position information of the traction device; the second distance generating unit is used for obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device; the first comparison unit is used for comparing the second distance with a second threshold value, and if the second distance is smaller than the second threshold value, the target vehicle is taken as a traction vehicle; a third distance generating unit configured to obtain a third distance between the other vehicle and the traction device based on the position information of the other vehicle except the target vehicle among the plurality of vehicles and the position information of the traction device; and the second comparison unit is used for comparing the third distance with a third threshold value, acquiring a third distance smaller than the third threshold value, and taking the vehicle corresponding to the third distance smaller than the third threshold value as a towed vehicle.
In one embodiment, the traction device is a traction rope; the position information of the traction device comprises position information of two end points of the traction rope; the second distance generating unit is used for obtaining a second distance between the target vehicle and two end points of the traction rope according to the position information of the target vehicle and the position information of the two end points of the traction rope; and the first comparison unit is used for comparing the second distance between the target vehicle and one end point of the traction rope with a second threshold value, and if the second distance between the target vehicle and one end point of the traction rope is smaller than the second threshold value, the target vehicle is taken as the traction vehicle.
In an embodiment, the third distance generating unit is configured to obtain the third distance between the other vehicle and the other end point of the pull rope based on the position information of the other vehicle and the position information of the other end point of the pull rope.
In one embodiment, the obtaining module 701 includes: the second acquisition unit is used for acquiring an original image, wherein the original image is a composite image obtained by splicing a plurality of illegal images, and the illegal images are obtained by shooting the same illegal event; a boundary identifying unit for identifying a boundary region in the original image; and the shearing unit is used for shearing the original image according to the boundary area to obtain a plurality of images to be detected.
In one embodiment, a detection result indicating that the vehicle is legal is generated when any of the following occurs: the first distance is greater than or equal to the first threshold; at most one vehicle exists in the image to be detected; no associated object exists in the image to be detected; or a plurality of vehicles and an associated object are determined to exist in the image to be detected, but at most one of the vehicles is associated with the associated object.
For specific limitations of the vehicle detection apparatus, reference may be made to the limitations of the vehicle detection method described above, which are not repeated here. Each module in the vehicle detection apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be implemented through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a vehicle detection method. The display screen of the computer device can be a liquid crystal display or an electronic ink display, and the input device can be a touch layer overlaid on the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of part of the structure related to the present disclosure and does not limit the computer device to which the present disclosure is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an image to be detected; detecting the image to be detected, and if it is determined that a plurality of vehicles and an associated object exist in the image to be detected, locating two vehicles associated with the associated object from the plurality of vehicles according to the associated object; acquiring a first distance between the two vehicles according to the position information of the two vehicles; and if the first distance is smaller than a first threshold, generating a detection result indicating a vehicle violation.
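As an illustration of the final comparison, the sketch below treats the first distance as the horizontal gap between the two vehicles' bounding boxes and uses an assumed pixel threshold; the disclosure only requires a distance derived from the two vehicles' position information, so both choices are assumptions.

```python
def first_distance(box_a, box_b):
    """Illustrative first distance: horizontal gap between two vehicle bounding
    boxes, zero if they overlap (an assumed definition, see lead-in)."""
    ax1, _, ax2, _ = box_a
    bx1, _, bx2, _ = box_b
    gap = max(bx1 - ax2, ax1 - bx2)
    return max(gap, 0)

# Usage sketch with hypothetical boxes and an assumed first threshold of 60 pixels
towing_box, towed_box = (100, 200, 260, 320), (300, 210, 470, 330)
if first_distance(towing_box, towed_box) < 60:
    print("detection result: vehicle violation (illegal towing)")
```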
In one embodiment, the processor, when executing the computer program, performs the steps of:
performing target detection on the image to be detected, and if a plurality of vehicles exist in the image to be detected, acquiring a vehicle area image corresponding to each vehicle; locating a license plate region in each vehicle area image to obtain a plurality of license plate region images; performing text recognition on each of the plurality of license plate region images to obtain a plurality of pieces of license plate information; and if license plate information identical to the standard license plate information exists among the plurality of pieces of license plate information, performing target detection on the image to be detected and determining that the associated object exists in the image to be detected.
In one embodiment, the associated object is a traction device; the processor, when executing the computer program, implements the steps of:
acquiring position information of a target vehicle corresponding to the standard license plate information and position information of the traction device; obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device; if the second distance is smaller than a second threshold value, taking the target vehicle as the traction vehicle; obtaining, for each vehicle among the plurality of vehicles other than the target vehicle, a third distance between that vehicle and the traction device according to the position information of that vehicle and the position information of the traction device; and acquiring a third distance smaller than a third threshold value, and taking the vehicle corresponding to that third distance as the towed vehicle.
In one embodiment, the traction device is a traction rope; the position information of the traction device comprises position information of two end points of the traction rope; the processor, when executing the computer program, implements the steps of:
obtaining second distances between the target vehicle and the two end points of the traction rope according to the position information of the target vehicle and the position information of the two end points of the traction rope; and if the second distance between the target vehicle and one end point of the traction rope is smaller than the second threshold value, taking the target vehicle as the traction vehicle.
In one embodiment, the processor, when executing the computer program, performs the steps of:
and obtaining a third distance between the other vehicle and the other end point of the traction rope according to the position information of the other vehicle and the position information of the other end point of the traction rope.
In one embodiment, the processor, when executing the computer program, performs the steps of:
acquiring an original image, wherein the original image is a composite image obtained by splicing a plurality of violation images, and the violation images are captured of the same violation event; identifying a boundary region in the original image; and cropping the original image according to the boundary region to obtain a plurality of images to be detected.
In one embodiment, the processor, when executing the computer program, performs the steps of:
generating a detection result indicating that the vehicle is not in violation when any one of the following occurs: the first distance is greater than or equal to the first threshold; at most one vehicle exists in the image to be detected; no associated object exists in the image to be detected; or a plurality of vehicles and the associated object are determined to exist in the image to be detected, but at most one of the plurality of vehicles is associated with the associated object.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be detected; detecting the image to be detected, and if it is determined that a plurality of vehicles and an associated object exist in the image to be detected, locating two vehicles associated with the associated object from the plurality of vehicles according to the associated object; acquiring a first distance between the two vehicles according to the position information of the two vehicles; and if the first distance is smaller than a first threshold, generating a detection result indicating a vehicle violation.
In one embodiment, the computer program when executed by the processor implements the steps of:
performing target detection on the image to be detected, and if a plurality of vehicles exist in the image to be detected, acquiring a vehicle area image corresponding to each vehicle; locating a license plate region in each vehicle area image to obtain a plurality of license plate region images; performing text recognition on each of the plurality of license plate region images to obtain a plurality of pieces of license plate information; and if license plate information identical to the standard license plate information exists among the plurality of pieces of license plate information, performing target detection on the image to be detected and determining that the associated object exists in the image to be detected.
In one embodiment, the associated object is a traction device; the computer program when executed by a processor implements the steps of:
acquiring position information of a target vehicle corresponding to the standard license plate information and position information of the traction device; obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device; if the second distance is smaller than a second threshold value, taking the target vehicle as the traction vehicle; obtaining, for each vehicle among the plurality of vehicles other than the target vehicle, a third distance between that vehicle and the traction device according to the position information of that vehicle and the position information of the traction device; and acquiring a third distance smaller than a third threshold value, and taking the vehicle corresponding to that third distance as the towed vehicle.
In one embodiment, the traction device is a traction rope; the position information of the traction device comprises position information of two end points of the traction rope; the computer program when executed by a processor implements the steps of:
obtaining second distances between the target vehicle and the two end points of the traction rope according to the position information of the target vehicle and the position information of the two end points of the traction rope; and if the second distance between the target vehicle and one end point of the traction rope is smaller than the second threshold value, taking the target vehicle as the traction vehicle.
In one embodiment, the computer program when executed by the processor implements the steps of:
and obtaining a third distance between the other vehicle and the other end point of the traction rope according to the position information of the other vehicle and the position information of the other end point of the traction rope.
In one embodiment, the computer program when executed by the processor implements the steps of:
acquiring an original image, wherein the original image is a composite image obtained by splicing a plurality of violation images, and the violation images are captured of the same violation event; identifying a boundary region in the original image; and cropping the original image according to the boundary region to obtain a plurality of images to be detected.
In one embodiment, the computer program when executed by the processor implements the steps of:
generating a detection result indicating that the vehicle is not in violation when any one of the following occurs: the first distance is greater than or equal to the first threshold; at most one vehicle exists in the image to be detected; no associated object exists in the image to be detected; or a plurality of vehicles and the associated object are determined to exist in the image to be detected, but at most one of the plurality of vehicles is associated with the associated object.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, as long as a combination of these technical features involves no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A vehicle detection method, characterized in that the method comprises:
acquiring an image to be detected;
detecting the image to be detected, and if it is determined that a plurality of vehicles and an associated object exist in the image to be detected, locating two vehicles associated with the associated object from the plurality of vehicles according to the associated object;
acquiring a first distance between the two vehicles according to the position information of the two vehicles;
and if the first distance is smaller than a first threshold value, generating a detection result indicating a vehicle violation.
2. The method according to claim 1, wherein the detecting the image to be detected comprises:
performing target detection on the image to be detected, and if the plurality of vehicles exist in the image to be detected, acquiring a vehicle area image corresponding to each vehicle;
locating a license plate region in each vehicle area image to obtain a plurality of license plate region images;
performing text recognition on each license plate region image in the plurality of license plate region images to obtain a plurality of license plate information;
and if the license plate information identical to the standard license plate information exists in the plurality of license plate information, performing target detection on the image to be detected, and determining that the associated object exists in the image to be detected.
3. The method of claim 2, wherein the associated object is a traction device; the locating, according to the associated object, two vehicles associated with the associated object from the plurality of vehicles includes:
acquiring position information of a target vehicle corresponding to the standard license plate information and position information of the traction device;
obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device;
if the second distance is smaller than a second threshold value, taking the target vehicle as a traction vehicle;
obtaining, for each vehicle among the plurality of vehicles other than the target vehicle, a third distance between that vehicle and the traction device according to the position information of that vehicle and the position information of the traction device;
and acquiring a third distance smaller than a third threshold value, and taking the vehicle corresponding to the third distance smaller than the third threshold value as a towed vehicle.
4. The method of claim 3, wherein the traction device is a traction rope; the position information of the traction device comprises position information of two end points of the traction rope;
the obtaining a second distance between the target vehicle and the traction device according to the position information of the target vehicle and the position information of the traction device includes:
obtaining second distances between the target vehicle and the two end points of the traction rope according to the position information of the target vehicle and the position information of the two end points of the traction rope;
and the taking the target vehicle as a traction vehicle if the second distance is smaller than the second threshold value comprises:
and if the second distance between the target vehicle and one end point of the traction rope is smaller than the second threshold value, taking the target vehicle as the traction vehicle.
5. The method of claim 4, wherein the obtaining, for each vehicle among the plurality of vehicles other than the target vehicle, a third distance between that vehicle and the traction device according to the position information of that vehicle and the position information of the traction device comprises:
and obtaining a third distance between the other vehicle and the other end point of the traction rope according to the position information of the other vehicle and the position information of the other end point of the traction rope.
6. The method according to any one of claims 1 to 5, wherein the acquiring the image to be detected comprises:
acquiring an original image, wherein the original image is a composite image obtained by splicing a plurality of violation images, and the violation images are captured of the same violation event;
identifying a boundary region in the original image;
and cropping the original image according to the boundary region to obtain a plurality of images to be detected.
7. The method according to any one of claims 1 to 5, further comprising:
generating a detection result indicating that the vehicle is not in violation when any one of the following occurs:
the first distance is greater than or equal to the first threshold;
at most one vehicle exists in the image to be detected;
the associated object does not exist in the image to be detected;
determining that the plurality of vehicles and the associated object exist in the image to be detected, but at most one of the plurality of vehicles is associated with the associated object.
8. A vehicle detection apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an image to be detected;
the detection module is used for detecting the image to be detected and determining whether a plurality of vehicles and associated objects exist in the image to be detected;
the associated vehicle positioning module is used for positioning two vehicles associated with the associated object from the plurality of vehicles according to the associated object when the plurality of vehicles and the associated object are determined to exist in the image to be detected;
the first distance generation module is used for acquiring a first distance between the two vehicles according to the position information of the two vehicles;
and the result generation module is used for generating a detection result indicating a vehicle violation when the first distance is smaller than a first threshold value.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110129953.8A CN112818847A (en) | 2021-01-29 | 2021-01-29 | Vehicle detection method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112818847A true CN112818847A (en) | 2021-05-18 |
Family
ID=75860347
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110129953.8A Pending CN112818847A (en) | 2021-01-29 | 2021-01-29 | Vehicle detection method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112818847A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113963350A (en) * | 2021-11-08 | 2022-01-21 | 西安链科信息技术有限公司 | Vehicle identification detection method, system, computer equipment, storage medium and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||