CN117423101A - Method and system for identifying license-free vehicle based on deep learning - Google Patents

Method and system for identifying license-free vehicles based on deep learning

Info

Publication number
CN117423101A
Authority
CN
China
Prior art keywords
vehicle
license plate
target
identification
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311470136.4A
Other languages
Chinese (zh)
Inventor
闫军
杨怀恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Intercommunication Technology Co ltd
Original Assignee
Smart Intercommunication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Intercommunication Technology Co ltd filed Critical Smart Intercommunication Technology Co ltd
Priority to CN202311470136.4A priority Critical patent/CN117423101A/en
Publication of CN117423101A publication Critical patent/CN117423101A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G06V 20/60 Type of objects
    • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 20/625 License plates
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/14 Image acquisition
    • G06V 30/146 Aligning or centring of the image pick-up or image-field
    • G06V 30/147 Determination of region of interest
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection
    • G06V 2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for identifying unlicensed vehicles based on deep learning, relating to the field of intelligent traffic management. The method comprises: training a preset convolutional neural network model on labeled vehicle target information, identification information corresponding to the vehicle targets, and/or plate-area regions corresponding to vehicles without vehicle identification information; and using the trained model to detect, identify and analyze each vehicle target in the video frames and to judge whether a vehicle carries a license plate. The method achieves real-time detection, processing and monitoring of unlicensed vehicles, distinguishes licensed from unlicensed vehicles quickly and accurately, avoids the hardware cost of heavy computation while guaranteeing accuracy, and provides efficient, accurate technical support for the automation of traffic management.

Description

Method and system for identifying license-free vehicle based on deep learning
Technical Field
The invention relates to the field of intelligent traffic management, and in particular to a method and a system for identifying unlicensed vehicles based on deep learning.
Background
With the development of science and technology and rapid economic growth, the number of vehicles keeps increasing, and traffic management has become an important part of social governance. Current traffic management systems are automated and intelligent: they rely mainly on various video acquisition devices combined with accurate automatic analysis algorithms and software to achieve online, real-time and efficient management, supporting the construction of smart cities and a smart society. At present these systems depend on efficient processing of the acquired image frames and can, in general, accurately detect and identify people, vehicles and objects, making management fast and unmanned. However, road conditions are complex, many types of irregular vehicles are present, and a certain number of unlicensed vehicles exist among them, which disturbs detection and identification and in turn affects the accuracy and efficiency of the management system. Occlusion by other targets in the scene and unfavorable viewing angles may cause the targets of interest to be partially hidden and thus falsely detected or missed, so the algorithm must handle many complex situations; in particular, when confronted with unlicensed vehicles it must quickly and correctly distinguish licensed vehicles from unlicensed ones.
To distinguish licensed from unlicensed vehicles, the common approach is to use whether a license plate can be detected inside the vehicle's detection box as the main criterion. However, when the scene is too complex and targets occlude one another, or when poor illumination degrades imaging quality, license plate targets may be falsely detected or missed, licensed and unlicensed vehicles are confused, and the decisions and actions of the traffic management system are affected.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method and a system for identifying unlicensed vehicles based on deep learning, which address the low identification accuracy and strong environmental interference of existing unlicensed-vehicle identification.
In order to achieve the above object, in one aspect, the present invention provides a method for identifying unlicensed vehicles based on deep learning, the method comprising:
acquiring a plurality of video frames of a monitoring area collected by roadside parking equipment;
labeling different types of targets in the plurality of video frames of the monitoring area with vehicle target information, identification information corresponding to the vehicle targets, and/or identification position areas corresponding to vehicles without vehicle identification information;
training a preset convolutional neural network model on the vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and performing detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model; and
confirming whether an unlicensed vehicle exists in the monitoring area according to the detection and identification result.
Further, the step of labeling different types of targets in the plurality of video frames of the monitoring area with the vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information includes:
labeling, according to the category of each target in the plurality of video frames of the monitoring area, the vehicle target information, the identification information corresponding to the vehicle target, and/or the identification position area corresponding to the vehicle without vehicle identification information with preset rectangular boxes, and assigning corresponding category numbers.
Further, the step of performing detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model includes:
performing detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model, so as to obtain the classification information and circumscribed rectangular box information of each detected target.
Further, the step of confirming whether an unlicensed vehicle exists in the monitoring area according to the detection and identification result includes:
determining, according to the detection and identification result, each detected vehicle target, license plate target and/or plate-area target of an unlicensed vehicle;
traversing each vehicle target area one by one to detect whether a license plate target or a plate-area target of an unlicensed vehicle exists therein; and
confirming whether an unlicensed vehicle exists in the monitoring area according to the intersection over union between the vehicle and the corresponding license plate target and/or plate-area target of an unlicensed vehicle.
Further, the step of confirming whether an unlicensed vehicle exists in the monitoring area according to the intersection over union between the vehicle and the corresponding license plate target and/or plate-area target of an unlicensed vehicle includes:
when the intersection over union between a vehicle and a plate-area target of an unlicensed vehicle is greater than zero, determining that an unlicensed vehicle exists in the monitoring area; and
when a vehicle intersects both, or neither, a license plate target and a plate-area target of an unlicensed vehicle, outputting warning information.
In another aspect, the present invention provides an unlicensed-vehicle identification system based on deep learning, the system comprising: an acquisition unit, configured to acquire a plurality of video frames of a monitoring area collected by roadside parking equipment;
a labeling unit, configured to label different types of targets in the plurality of video frames of the monitoring area with vehicle target information, identification information corresponding to the vehicle targets, and/or identification position areas corresponding to vehicles without vehicle identification information;
a detection unit, configured to train a preset convolutional neural network model on the vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and to perform detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model; and
a confirmation unit, configured to confirm whether an unlicensed vehicle exists in the monitoring area according to the detection and identification result.
Further, the labeling unit is specifically configured to label, according to the category of each target in the plurality of video frames of the monitoring area, the vehicle target information, the identification information corresponding to the vehicle target, and/or the identification position area corresponding to the vehicle without vehicle identification information with preset rectangular boxes, and to assign corresponding category numbers.
Further, the detection unit is specifically configured to perform detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model, so as to obtain the classification information and circumscribed rectangular box information of each detected target.
Further, the confirmation unit is specifically configured to determine, according to the detection and identification result, each detected vehicle target, license plate target and/or plate-area target of an unlicensed vehicle; to traverse each vehicle target area one by one to detect whether a license plate target or a plate-area target of an unlicensed vehicle exists therein; and to confirm whether an unlicensed vehicle exists in the monitoring area according to the intersection over union between the vehicle and the corresponding license plate target and/or plate-area target of an unlicensed vehicle.
Further, the confirmation unit is specifically further configured to determine that an unlicensed vehicle exists in the monitoring area when the intersection over union between a vehicle and a plate-area target of an unlicensed vehicle is greater than zero, and to output warning information when a vehicle intersects both, or neither, a license plate target and a plate-area target of an unlicensed vehicle.
According to the method and the system for identifying unlicensed vehicles based on deep learning, a preset convolutional neural network model is trained on the labeled vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and the trained model is used to detect, identify and analyze each vehicle target in the plurality of video frames and to judge whether a vehicle carries a license plate. Real-time detection, processing and monitoring of unlicensed vehicles are thereby achieved; licensed and unlicensed vehicles can be distinguished quickly and accurately; the hardware cost of heavy computation is avoided while accuracy is guaranteed; and efficient, accurate technical support is provided for the automation of traffic management.
Drawings
FIG. 1 is a flow chart of the method for identifying unlicensed vehicles based on deep learning provided by the invention;
FIG. 2 is a schematic structural diagram of the system for identifying unlicensed vehicles based on deep learning provided by the invention.
Detailed Description
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
The method steps are detailed below. As shown in FIG. 1, the method for identifying unlicensed vehicles based on deep learning provided by the embodiment of the invention includes the following steps:
101. A plurality of video frames of the monitoring area collected by the roadside parking device are acquired.
Specifically, the video frames are collected by multiple video devices covering different scenes and viewing angles, under different time periods, illumination conditions, weather conditions and the like.
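As a minimal illustration only (not part of the patent text), frames from a roadside camera could be sampled roughly as follows; the use of OpenCV, the stream URL and the sampling interval are all assumptions:

```python
import cv2  # OpenCV, assumed here only for illustration of frame acquisition

def sample_frames(stream_url: str, every_n: int = 25):
    """Yield every n-th frame from a roadside camera stream."""
    cap = cv2.VideoCapture(stream_url)
    idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_n == 0:
                yield frame
            idx += 1
    finally:
        cap.release()

# Hypothetical stream address; a real deployment would use the device's own URL.
for frame in sample_frames("rtsp://camera.example/stream", every_n=25):
    pass  # hand the frame to the labeling or inference stage
```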
102. Different types of targets in the plurality of video frames of the monitoring area are labeled with vehicle target information, identification information corresponding to the vehicle targets, and/or identification position areas corresponding to vehicles without vehicle identification information.
Specifically, according to the category of each target in the plurality of video frames of the monitoring area, the vehicle target information, the identification information corresponding to the vehicle target, and/or the identification position area corresponding to the vehicle without vehicle identification information are labeled with preset rectangular boxes, and corresponding category numbers are assigned.
103. A preset convolutional neural network model is trained on the vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and detection, identification and analysis of each vehicle target are performed on the plurality of video frames through the trained preset convolutional neural network model.
Specifically, detection, identification and analysis of each vehicle target are performed on the plurality of video frames through the trained preset convolutional neural network model, so as to obtain the classification information and circumscribed rectangular box information of each detected target.
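The patent does not name a particular network. Purely as a sketch, a YOLO-style detector trained on the three labeled target classes (vehicle, license plate, plate area of an unlicensed vehicle) could be queried as follows; the ultralytics package, the weight file name and the output format are assumptions, not the patent's implementation:

```python
from ultralytics import YOLO  # assumed detector; the description only says "preset CNN model"

# Hypothetical weights trained on classes 0 (vehicle), 1 (plate), 2 (plate area of unlicensed vehicle)
model = YOLO("unlicensed_vehicle_detector.pt")

def detect_targets(frame):
    """Run the detector on one video frame; return (class_id, [x1, y1, x2, y2]) per target."""
    result = model(frame)[0]
    detections = []
    for box in result.boxes:
        cls_id = int(box.cls.item())
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        detections.append((cls_id, [x1, y1, x2, y2]))
    return detections
```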
For example, labeling a video frame means marking each target of interest with a rectangular box that represents its position attribute; the box is the minimum circumscribed rectangle of the outermost boundary of the target. When a video frame is labeled, each target of interest is also assigned a category label representing its category attribute. In this method the targets of interest are the vehicle, the license plate, and the plate area of an unlicensed vehicle (the region where a plate would normally be mounted), labeled as categories 0, 1 and 2 respectively.
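As an illustration of the labeling step, a per-target label could be serialized as a category number plus a normalized box. The YOLO-style text format below is an assumption; the description itself only requires minimum circumscribed rectangles and category numbers 0, 1 and 2:

```python
def to_label_line(cls_id, box, img_w, img_h):
    """Convert a pixel-space box [x1, y1, x2, y2] to a normalized label line.

    Category numbers follow the description: 0 vehicle, 1 license plate,
    2 plate area of an unlicensed vehicle. The normalized text format is an
    illustrative assumption, not mandated by the patent."""
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Example: a vehicle box and the plate-area box of an unlicensed vehicle in a 1920x1080 frame
print(to_label_line(0, [400, 300, 1100, 900], 1920, 1080))
print(to_label_line(2, [700, 780, 830, 840], 1920, 1080))
```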
104. Whether an unlicensed vehicle exists in the monitoring area is confirmed according to the detection and identification result.
Specifically, each detected vehicle target, license plate target and/or plate-area target of an unlicensed vehicle is determined according to the detection and identification result; each vehicle target area is traversed one by one to detect whether a license plate target or a plate-area target of an unlicensed vehicle exists therein; and whether an unlicensed vehicle exists in the monitoring area is confirmed according to the intersection over union between the vehicle and the corresponding license plate target and/or plate-area target of an unlicensed vehicle.
The step of confirming whether an unlicensed vehicle exists in the monitoring area according to the intersection over union between the vehicle and the corresponding license plate target or plate-area target of an unlicensed vehicle includes: when the intersection over union between a vehicle and a plate-area target of an unlicensed vehicle is greater than zero, determining that an unlicensed vehicle exists in the monitoring area; and when a vehicle intersects both, or neither, a license plate target and a plate-area target of an unlicensed vehicle, outputting warning information.
The neural network model is used to compute and analyze a video frame to obtain the detection result for its targets of interest: the video frame to be detected is input into the neural network model, which outputs the classification information and circumscribed box information of each detected target, and this information is the data basis for the subsequent processing. Licensed and unlicensed vehicles are then judged as follows. Each vehicle target area is traversed one by one, and it is checked whether a license plate target or a plate-area target of an unlicensed vehicle lies within it. Whether the vehicle is unlicensed is decided by the intersection over union (IoU) between the vehicle and the license plate or plate-area target; the IoU represents the positional relationship between the two. When the IoU between a vehicle and a license plate is greater than 0, the two are considered to intersect and have an overlapping area, meaning the vehicle is a licensed vehicle. When the IoU between a vehicle and a plate-area target of an unlicensed vehicle is greater than 0, the two are considered to intersect, meaning the vehicle is an unlicensed vehicle. In particular, when a vehicle intersects both a license plate target and a plate-area target, the result is considered uncertain and an alarm is sent to the system for handling; likewise, when a vehicle intersects neither target, the result is considered uncertain and an alarm is sent to the system for handling.
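The judgment described above reduces to a simple rule over pairwise IoU values. The following is a minimal sketch of that rule, assuming the category numbers 0 (vehicle), 1 (license plate) and 2 (plate area of an unlicensed vehicle) from the labeling step; the function names and the detection tuple format are illustrative assumptions rather than the patent's implementation:

```python
VEHICLE, PLATE, NO_PLATE_AREA = 0, 1, 2  # category numbers as described above

def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def classify_vehicles(detections):
    """detections: list of (class_id, box). Returns (vehicle_box, verdict) pairs.

    Verdicts: 'licensed', 'unlicensed', or 'alert' when the vehicle overlaps
    both a plate and a no-plate area, or neither (the uncertain cases)."""
    vehicles = [box for cls, box in detections if cls == VEHICLE]
    plates = [box for cls, box in detections if cls == PLATE]
    no_plate_areas = [box for cls, box in detections if cls == NO_PLATE_AREA]
    results = []
    for v in vehicles:
        has_plate = any(iou(v, p) > 0 for p in plates)
        has_no_plate_area = any(iou(v, n) > 0 for n in no_plate_areas)
        if has_plate and not has_no_plate_area:
            verdict = "licensed"
        elif has_no_plate_area and not has_plate:
            verdict = "unlicensed"
        else:
            verdict = "alert"  # intersects both or neither: uncertain, warn the system
        results.append((v, verdict))
    return results
```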
A specific application scenario of the embodiment of the invention may be, but is not limited to, the following. The device acquires a video frame to be detected, inputs it into the neural network model for computation, extracts the detection boxes of the vehicle, license plate and plate-area targets of unlicensed vehicles, i.e. the position information of each target, and records this information. The vehicle target class is denoted A, the license plate target class is denoted B, and the plate-area target of an unlicensed vehicle is denoted C. The vehicle targets A are then traversed one by one: the intersection over union between each vehicle target and each license plate target is computed to judge whether the vehicle may be a licensed vehicle, recorded as result I; the intersection over union between each vehicle target and each plate-area target of an unlicensed vehicle is computed to judge whether the vehicle may be an unlicensed vehicle, recorded as result II; if result I and result II contradict each other, the alarm system handles the case. In this embodiment, the target vehicle is a normally licensed vehicle whose license plate is blocked by a roadside tree trunk. According to the detection result, the vehicle A0 is detected, but within the area of A0 neither a license plate Bi nor a plate-area target Ci of an unlicensed vehicle is detected. By the definitions above, the vehicle can be judged neither licensed nor unlicensed, and the alarm system handles the case. This avoids the defect of conventional algorithms that judge only by whether a license plate is detected, which would wrongly classify the vehicle as unlicensed when its plate is occluded.
As another example, the target vehicle is an old-style scooter. Because the scooter differs very little from a normal vehicle, it is falsely detected as vehicle A0; within the area of A0, a plate-area target Ci of an unlicensed vehicle is detected. The computed IoU between A0 and Ci is greater than 0, so by the definitions above the vehicle is judged to be unlicensed. Because the plate area of an unlicensed vehicle is itself a target of interest, this avoids the situation in previous algorithms where a non-standard plate-like region, such as that of a scooter, is falsely detected as a license plate, leading to the wrong judgment that the vehicle is licensed.
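Continuing the sketch above, the two scenarios just described could be exercised as follows; the box coordinates are made up purely for illustration:

```python
# Scenario 1: licensed vehicle whose plate is occluded; only the vehicle box is detected.
occluded = [(VEHICLE, [100, 100, 600, 500])]
print(classify_vehicles(occluded))   # -> [([100, 100, 600, 500], 'alert')]  uncertain, warn the system

# Scenario 2: a scooter falsely detected as a vehicle, with a plate-area (no plate) target inside it.
scooter = [(VEHICLE, [100, 100, 400, 400]), (NO_PLATE_AREA, [220, 330, 280, 360])]
print(classify_vehicles(scooter))    # -> [([100, 100, 400, 400], 'unlicensed')]
```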
According to the method for identifying unlicensed vehicles based on deep learning provided by the embodiment of the invention, a preset convolutional neural network model is trained on the labeled vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and the trained model is used to detect, identify and analyze each vehicle target in the plurality of video frames and to judge whether a vehicle carries a license plate. Real-time detection, processing and monitoring of unlicensed vehicles are thereby achieved; licensed and unlicensed vehicles can be distinguished quickly and accurately; the hardware cost of heavy computation is avoided while accuracy is guaranteed; and efficient, accurate technical support is provided for the automation of traffic management.
In order to implement the method provided by the embodiment of the present invention, the embodiment of the present invention further provides an unlicensed-vehicle identification system based on deep learning. As shown in FIG. 2, the system includes: an acquisition unit 21, a labeling unit 22, a detection unit 23, and a confirmation unit 24.
The acquisition unit 21 is configured to acquire a plurality of video frames of the monitoring area collected by the roadside parking equipment;
the labeling unit 22 is configured to label different types of targets in the plurality of video frames of the monitoring area with vehicle target information, identification information corresponding to the vehicle targets, and/or identification position areas corresponding to vehicles without vehicle identification information;
the detection unit 23 is configured to train a preset convolutional neural network model on the vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and to perform detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model; and
the confirmation unit 24 is configured to confirm whether an unlicensed vehicle exists in the monitoring area according to the detection and identification result.
Further, the labeling unit 22 is specifically configured to label, according to the category of each target in the plurality of video frames of the monitoring area, the vehicle target information, the identification information corresponding to the vehicle target, and/or the identification position area corresponding to the vehicle without vehicle identification information with preset rectangular boxes, and to assign corresponding category numbers.
Further, the detection unit 23 is specifically configured to perform detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model, so as to obtain the classification information and circumscribed rectangular box information of each detected target.
Further, the confirmation unit 24 is specifically configured to determine, according to the detection and identification result, each detected vehicle target, license plate target and/or plate-area target of an unlicensed vehicle; to traverse each vehicle target area one by one to detect whether a license plate target or a plate-area target of an unlicensed vehicle exists therein; and to confirm whether an unlicensed vehicle exists in the monitoring area according to the intersection over union between the vehicle and the corresponding license plate target and/or plate-area target of an unlicensed vehicle.
Further, the confirmation unit 24 is specifically further configured to determine that an unlicensed vehicle exists in the monitoring area when the intersection over union between a vehicle and a plate-area target of an unlicensed vehicle is greater than zero, and to output warning information when a vehicle intersects both, or neither, a license plate target and a plate-area target of an unlicensed vehicle.
According to the unlicensed-vehicle identification system based on deep learning, a preset convolutional neural network model is trained on the labeled vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and the trained model is used to detect, identify and analyze each vehicle target in the plurality of video frames and to judge whether a vehicle carries a license plate. Real-time detection, processing and monitoring of unlicensed vehicles are thereby achieved; licensed and unlicensed vehicles can be distinguished quickly and accurately; the hardware cost of heavy computation is avoided while accuracy is guaranteed; and efficient, accurate technical support is provided for the automation of traffic management.
It should be understood that the specific order or hierarchy of steps in the processes disclosed are examples of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "including" is intended to be inclusive in a manner similar to the term "comprising" as interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a "non-exclusive or".
Those of skill in the art will further appreciate that the various illustrative logical blocks (illustrative logical block), units, and steps described in connection with the embodiments of the invention may be implemented by electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components (illustrative components), elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Those skilled in the art may implement the described functionality in varying ways for each particular application, but such implementation is not to be understood as beyond the scope of the embodiments of the present invention.
The various illustrative logical blocks or units described in the embodiments of the invention may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic system, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing systems, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. In an example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may reside in a user terminal. In the alternative, the processor and the storage medium may reside as distinct components in a user terminal.
In one or more exemplary designs, the above-described functions of embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media may include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer or processor. Further, any connection is properly termed a computer-readable medium; for example, if the software is transmitted from a website, server, or other remote source via coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, these are also included in the definition of computer-readable medium. Disk and disc, as used herein, include compact disc, laser disc, optical disc, DVD, floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included within the scope of computer-readable media.
The foregoing description of the embodiments is intended to illustrate the general principles of the invention and is not meant to limit the invention to the particular embodiments described; any modifications, equivalents, improvements and the like made within the spirit and principles of the invention are intended to be included within its scope.

Claims (10)

1. A method for identifying unlicensed vehicles based on deep learning, characterized in that the method comprises the following steps:
acquiring a plurality of video frames of a monitoring area collected by roadside parking equipment;
labeling different types of targets in the plurality of video frames of the monitoring area with vehicle target information, identification information corresponding to the vehicle targets, and/or identification position areas corresponding to vehicles without vehicle identification information;
training a preset convolutional neural network model on the vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and performing detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model; and
confirming whether an unlicensed vehicle exists in the monitoring area according to the detection and identification result.
2. The method for identifying unlicensed vehicles based on deep learning according to claim 1, wherein the step of labeling different types of targets in the plurality of video frames of the monitoring area with the vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information comprises:
labeling, according to the category of each target in the plurality of video frames of the monitoring area, the vehicle target information, the identification information corresponding to the vehicle target, and/or the identification position area corresponding to the vehicle without vehicle identification information with preset rectangular boxes, and assigning corresponding category numbers.
3. The method for identifying unlicensed vehicles based on deep learning according to claim 2, wherein the step of performing detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model comprises:
performing detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model, so as to obtain the classification information and circumscribed rectangular box information of each detected target.
4. The method for identifying unlicensed vehicles based on deep learning according to claim 1, wherein the step of confirming whether an unlicensed vehicle exists in the monitoring area according to the detection and identification result comprises:
determining, according to the detection and identification result, each detected vehicle target, license plate target and/or plate-area target of an unlicensed vehicle;
traversing each vehicle target area one by one to detect whether a license plate target or a plate-area target of an unlicensed vehicle exists therein; and
confirming whether an unlicensed vehicle exists in the monitoring area according to the intersection over union between the vehicle and the corresponding license plate target and/or plate-area target of an unlicensed vehicle.
5. The method for identifying unlicensed vehicles based on deep learning according to claim 4, wherein the step of confirming whether an unlicensed vehicle exists in the monitoring area according to the intersection over union between the vehicle and the corresponding license plate target and/or plate-area target of an unlicensed vehicle comprises:
when the intersection over union between a vehicle and a plate-area target of an unlicensed vehicle is greater than zero, determining that an unlicensed vehicle exists in the monitoring area; and
when a vehicle intersects both, or neither, a license plate target and a plate-area target of an unlicensed vehicle, outputting warning information.
6. An unlicensed-vehicle identification system based on deep learning, characterized in that the system comprises:
an acquisition unit, configured to acquire a plurality of video frames of a monitoring area collected by roadside parking equipment;
a labeling unit, configured to label different types of targets in the plurality of video frames of the monitoring area with vehicle target information, identification information corresponding to the vehicle targets, and/or identification position areas corresponding to vehicles without vehicle identification information;
a detection unit, configured to train a preset convolutional neural network model on the vehicle target information, the identification information corresponding to the vehicle targets, and/or the identification position areas corresponding to vehicles without vehicle identification information, and to perform detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model; and
a confirmation unit, configured to confirm whether an unlicensed vehicle exists in the monitoring area according to the detection and identification result.
7. The unlicensed-vehicle identification system based on deep learning according to claim 6, wherein
the labeling unit is specifically configured to label, according to the category of each target in the plurality of video frames of the monitoring area, the vehicle target information, the identification information corresponding to the vehicle target, and/or the identification position area corresponding to the vehicle without vehicle identification information with preset rectangular boxes, and to assign corresponding category numbers.
8. The unlicensed-vehicle identification system based on deep learning according to claim 7, wherein
the detection unit is specifically configured to perform detection, identification and analysis of each vehicle target on the plurality of video frames through the trained preset convolutional neural network model, so as to obtain the classification information and circumscribed rectangular box information of each detected target.
9. The unlicensed-vehicle identification system based on deep learning according to claim 6, wherein the confirmation unit is specifically configured to determine, according to the detection and identification result, each detected vehicle target, license plate target and/or plate-area target of an unlicensed vehicle; to traverse each vehicle target area one by one to detect whether a license plate target or a plate-area target of an unlicensed vehicle exists therein; and to confirm whether an unlicensed vehicle exists in the monitoring area according to the intersection over union between the vehicle and the corresponding license plate target and/or plate-area target of an unlicensed vehicle.
10. The unlicensed-vehicle identification system based on deep learning according to claim 9, wherein
the confirmation unit is specifically further configured to determine that an unlicensed vehicle exists in the monitoring area when the intersection over union between a vehicle and a plate-area target of an unlicensed vehicle is greater than zero, and to output warning information when a vehicle intersects both, or neither, a license plate target and a plate-area target of an unlicensed vehicle.
CN202311470136.4A 2023-11-07 2023-11-07 Method and system for identifying license-free vehicle based on deep learning Pending CN117423101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311470136.4A CN117423101A (en) 2023-11-07 2023-11-07 Method and system for identifying license-free vehicle based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311470136.4A CN117423101A (en) 2023-11-07 2023-11-07 Method and system for identifying license-free vehicle based on deep learning

Publications (1)

Publication Number Publication Date
CN117423101A (en) 2024-01-19

Family

ID=89532406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311470136.4A Pending CN117423101A (en) 2023-11-07 2023-11-07 Method and system for identifying license-free vehicle based on deep learning

Country Status (1)

Country Link
CN (1) CN117423101A (en)


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination