CN116012833B - License plate detection method, device, equipment, medium and program product
- Publication number: CN116012833B (application CN202310119014.4A)
- Authority: CN (China)
- Legal status: Active
Classifications
- Y02T — Climate change mitigation technologies related to transportation
- Y02T 10/00 — Road transport of goods or passengers
- Y02T 10/10 — Internal combustion engine [ICE] based vehicles
- Y02T 10/40 — Engine management systems
Abstract
The embodiments of the present disclosure disclose a license plate detection method, apparatus, device, medium, and program product in the field of image processing. The method includes: acquiring a pulse sequence signal of a monitoring area; determining a motion region and a stationary region in the pulse sequence signal, and performing binarization processing on the pulse sequence signal according to the motion region and the stationary region to obtain binarized data, wherein the motion region represents the pixel region in the pulse sequence signal that corresponds to a vehicle in the monitoring area; and identifying license plate information of the vehicle based on the binarized data. By identifying the motion region and the stationary region in the pulse sequence signal of the monitoring area, forming binarized data from the identification result, and then identifying the license plate information of the vehicle from the binarized data, image reconstruction of the entire pulse sequence signal is avoided, the consumption of computing resources and the time delay are reduced, and the efficiency and accuracy of license plate detection are improved.
Description
Technical Field
The present disclosure relates to image recognition technology, and in particular, to a license plate detection method, apparatus, device, medium, and program product.
Background
A pulse camera has ultra-high temporal resolution, breaking through the fixed-exposure-time limitation of conventional cameras, and therefore has a strong capability for detecting high-speed moving targets; owing to these characteristics, it is widely used in high-speed motion scenes. For example, a moving vehicle can be photographed with a pulse camera to acquire high-frequency, high-definition vehicle images, which can then be recognized to obtain the license plate information of the vehicle.
In the related art, when detecting the license plate of a moving vehicle, a pulse camera is generally used to collect the spatio-temporal signal of a monitoring area to generate a pulse sequence signal; the pulse sequence signal is reconstructed into images, and the license plate information of the vehicle is then identified from the reconstructed images using a filtering algorithm or a machine-learning-based image recognition algorithm. In this process, on the one hand, image reconstruction must be performed on a large number of pulse sequence signals, which consumes considerable computing resources and introduces significant latency; on the other hand, both conventional filtering methods and trained neural networks bring high computational complexity and likewise consume a large amount of computing resources.
Disclosure of Invention
The present disclosure has been made in order to solve the above technical problems. Embodiments of the present disclosure provide a license plate detection method, apparatus, device, medium, and program product.
According to an aspect of the embodiments of the present disclosure, there is provided a license plate detection method, including: acquiring a pulse sequence signal of a monitoring area; determining a motion region and a stationary region in the pulse sequence signal, and performing binarization processing on the pulse sequence signal according to the motion region and the stationary region to obtain binarized data, wherein the motion region represents the pixel region in the pulse sequence signal that corresponds to a vehicle in the monitoring area; and identifying license plate information of the vehicle based on the binarized data.
In some embodiments, identifying license plate information of a vehicle based on binarized data includes: determining a target area in the pulse sequence signal based on the binarized data, wherein the target area at least comprises a coverage area of a license plate of a vehicle in the pulse sequence signal; extracting a target pulse sequence signal of pixels in a target area from the pulse sequence signal; extracting characteristic information from a target pulse sequence signal by using a pre-trained encoder; and decoding the characteristic information by using a pre-trained decoder to obtain license plate information.
In some embodiments, identifying license plate information of a vehicle based on binarized data includes: determining a mask area of the vehicle based on the binarized data; extracting a first pulse sequence signal of a pixel corresponding to a mask region from the pulse sequence signal; performing image reconstruction on the first pulse sequence signal to obtain a vehicle image; and identifying the vehicle image and determining license plate information.
In some embodiments, identifying license plate information of a vehicle based on binarized data includes: determining a license plate region of the vehicle based on the binarized data; extracting a second pulse sequence signal of a pixel corresponding to the license plate region from the pulse sequence signal; performing image reconstruction on the second pulse sequence signal to obtain a license plate image; and identifying license plate images and determining license plate information.
In some embodiments, determining a motion region and a stationary region in the pulse sequence signal, and performing binarization processing on the pulse sequence signal according to the motion region and the stationary region to obtain binarized data, includes: in response to a motion region existing in the pulse sequence signal, performing binarization processing on the pulse sequence signal according to the motion region and the stationary region to generate the binarized data.
In some embodiments, determining a motion region in the pulse sequence signal includes: determining a region to be identified in the pixel region of the pulse sequence signal; predicting, based on the latest pulse interval length and the historical pulse interval lengths of each pixel position in the region to be identified, whether each pixel position in the region to be identified is a motion position, to obtain a prediction result, wherein the latest pulse interval length represents the interval time between the most recently received pulse and the previous pulse at any pixel position, and a historical pulse interval length represents the interval time between two adjacent historical pulses received at any pixel position; and determining the motion region in the pulse sequence signal based on the prediction result.
In some embodiments, predicting, based on the latest pulse interval length and the historical pulse interval lengths of each pixel position in the region to be identified, whether each pixel position in the region to be identified is a motion position, to obtain a prediction result, includes: constructing a plurality of deterministic finite state automata in one-to-one correspondence with the pixel positions in the region to be identified, wherein each deterministic finite state automaton pre-stores a state transfer function and a hidden state, the hidden state being a preset number of historical pulse interval lengths corresponding to the pixel position; acquiring the latest pulse interval length of each pixel position in the region to be identified and inputting it into the corresponding deterministic finite state automaton; comparing, with the plurality of deterministic finite state automata, the latest pulse interval lengths with the historical pulse interval lengths, and predicting, according to the respective state transfer functions, whether the corresponding pixel positions are motion positions, to obtain the output results of the deterministic finite state automata; and determining the prediction result based on the output results of the plurality of deterministic finite state automata.
In some embodiments, determining the prediction result based on the output results of the plurality of deterministic finite state automata includes: for each pixel position in the region to be identified, acquiring the output results of the deterministic finite state automata within a preset neighborhood of the pixel position, and determining whether the pixel position is a motion position based on the acquired output results, to obtain the prediction result.
In some embodiments, before acquiring the latest pulse interval length of each pixel position in the region to be identified, the method further includes: if the lengths of two consecutive pulse intervals in the pulse sequence signal are both smaller than a first preset threshold, determining the pulses corresponding to the two consecutive pulse intervals to be false pulses and sending alarm information; and splicing the two consecutive pulse intervals into a new pulse interval.
In some embodiments, before acquiring the latest pulse interval length of each pixel position in the region to be identified, the method further includes: if the pulse sequence signal contains an isolated pulse interval whose length is larger than a second preset threshold, determining that pulse loss has occurred in the pulse sequence signal and sending alarm information; and splitting the isolated pulse interval into two new pulse intervals.
According to still another aspect of the embodiments of the present disclosure, there is provided a license plate detection apparatus including: a signal acquisition unit configured to acquire a pulse sequence signal of a monitoring area; the area identification unit is configured to determine a motion area and a static area in the pulse sequence signal, and perform binarization processing on the pulse sequence signal according to the motion area and the static area to obtain binarized data, wherein the motion area represents a pixel area corresponding to a vehicle in the monitoring area in the pulse sequence signal; and an information detection unit configured to identify license plate information of the vehicle based on the binarized data.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device, including: a processor, a memory communicatively connected to the processor, and the license plate detection apparatus of the above embodiment; the memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory to control the license plate detection apparatus to implement the above method.
In some embodiments, the electronic device includes any one of: pulse cameras, high-speed cameras, vision cameras, audio players, video players, navigation devices, fixed position terminals, entertainment units, smartphones, communication devices, mobile devices, devices in motor vehicles, vehicle cameras, cell phone cameras, sports or wearable cameras, traffic cameras, industrial detection cameras, cameras mounted on flyable objects, medical cameras, security cameras, or household appliance cameras.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for implementing the above method.
According to a further aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-described method.
According to the license plate detection method, device, equipment, medium and program product provided by the embodiment of the disclosure, the moving area and the stationary area in the pulse sequence signal of the monitoring area can be identified, the binary data are formed according to the identification result, and then the license plate information of the vehicle is identified based on the binary data, so that image reconstruction of all pulse sequence signals is avoided, consumption and time delay of calculation resources are reduced, and the license plate detection efficiency and accuracy are improved.
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of one embodiment of a license plate detection method of the present disclosure;
FIG. 2 is a flow chart of identifying license plates in one embodiment of a license plate detection method of the present disclosure;
FIG. 3 is a flowchart illustrating a license plate recognition process according to another embodiment of the license plate detection method of the present disclosure;
FIG. 4 is a flow chart illustrating the identification of license plates in yet another embodiment of the license plate detection method of the present disclosure;
FIG. 5 is a flow chart of identifying a movement region in one embodiment of a license plate detection method of the present disclosure;
FIG. 6 is a flowchart illustrating whether the predicted pixel position is a motion position according to an embodiment of the license plate detection method of the present disclosure;
FIG. 7 is a schematic diagram illustrating a license plate detection device according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a pulse camera according to an embodiment of the disclosure.
Detailed Description
Example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present disclosure are used merely to distinguish between different steps, devices or modules, etc., and do not represent any particular technical meaning nor necessarily logical order between them.
It should also be understood that in embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the presently disclosed embodiments may be generally understood as one or more without explicit limitation or the contrary in the context.
In addition, the term "and/or" in this disclosure is merely an association relationship describing an association object, and indicates that three relationships may exist, such as a and/or B, and may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the front and rear association objects are an or relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and that the same or similar features may be referred to each other, and for brevity, will not be described in detail.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Embodiments of the present disclosure are applicable to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with the terminal device, computer system, or server, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks may be performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
The license plate detection method of the present disclosure is exemplarily described below with reference to fig. 1. Fig. 1 shows a flowchart of one embodiment of a license plate detection method of the present disclosure, as shown in fig. 1, including the following steps.
Step 110, acquiring a pulse sequence signal of the monitoring area.
At a certain moment, the front-end sensor of the pulse camera collects the optical signal in the monitoring area, converts the optical signal into pulse signals, and then arranges the pulse signals into a two-dimensional pulse array according to their spatial positions, yielding the pulse data of that moment. By acquiring continuously with the pulse camera and arranging the pulse arrays of multiple moments in temporal order, a three-dimensional pulse sequence signal is obtained.
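For illustration only (the patent itself gives no code), the pulse sequence signal described above can be pictured as a binary array stacked along the time axis; the sketch below assumes a NumPy representation, and the dimensions and firing rate are hypothetical.

```python
import numpy as np

# Hypothetical pulse sequence signal: T readout moments of an H x W
# two-dimensional pulse array, stacked along the time axis.
T, H, W = 1000, 250, 400                       # illustrative sizes only
rng = np.random.default_rng(0)
pulse_sequence = (rng.random((T, H, W)) > 0.98).astype(np.uint8)

# pulse_sequence[t, y, x] == 1 means pixel (y, x) fired a pulse at moment t.
# The pulse interval lengths of one pixel are the gaps between its firing moments.
fire_times = np.flatnonzero(pulse_sequence[:, 120, 200])
intervals = np.diff(fire_times)                # pulse interval lengths of pixel (120, 200)
```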
In this embodiment, the monitoring area may represent an area through which the vehicle passes, and may be, for example, a block or a lane to be monitored.
As an example, a pulse camera may be deployed in advance at a preset position of the monitoring area to collect the pulse sequence signal of the monitoring area, and the executing entity (for example, a terminal device or a server) then acquires the pulse sequence signal from the pulse camera.
And 120, determining a motion area and a static area in the pulse sequence signal, and performing binarization processing on the pulse sequence signal according to the motion area and the static area to obtain binarized data.
The motion area represents a pixel area corresponding to the vehicle in the monitoring area in the pulse sequence signal.
In this embodiment, the pulse sequence signal can be binarized in a simple manner according to the motion region and the stationary region; for example, the pixel positions in the motion region may be marked with 1 and the pixel positions in the stationary region marked with 0 to obtain the binarized data. In this way, the binarized data represent, with two values, whether each pixel position in the pulse sequence signal is a motion position; the data may take the form of an image or a matrix.
It can be seen that the binarization processing in this embodiment is a simple processing of the pulse sequence signal according to whether each pixel position is a motion position, rather than image reconstruction of the pulse sequence signal from the pulse trains of the pixel positions. Therefore, even if the binarized data take the form of a binary image, they only characterize whether each pixel position is a motion position and cannot characterize the spatio-temporal characteristics of each pixel position.
In a specific example, a plurality of pulse interval lengths corresponding to each pixel position may first be determined from the pulse arrays at the respective moments in the pulse sequence signal, where a pulse interval length represents the interval time between two adjacent pulses. When the pulse interval lengths corresponding to a certain pixel position show a trend of variation, for example gradually increasing or decreasing, the optical signal at that pixel position is changing, which happens when a vehicle passes through the region of real space corresponding to that pixel position; the pixel position can therefore be determined to be a motion position. Conversely, if the pulse interval length remains unchanged, the pixel position is a stationary position. Note that for non-textured areas in the monitoring area (for example, a solid-color area of a vehicle body) the pulse interval length is also unchanged, so pixel positions in non-textured areas are likewise identified as stationary. After the identification result of each pixel position is obtained, the motion region and the stationary region in the pulse sequence signal can be determined. The pixel positions of the motion region can then be marked as 1 and the pixel positions of the stationary region as 0 to obtain the binarized data.
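A minimal sketch of this binarization idea is shown below, reusing the NumPy array layout assumed earlier; the "trend of variation" test is approximated here by the spread between the longest and shortest interval of each pixel, which is an assumption rather than the patent's exact criterion.

```python
import numpy as np

def binarize(pulse_sequence: np.ndarray, tolerance: int = 0) -> np.ndarray:
    """Mark a pixel as motion (1) if its pulse interval lengths vary,
    and as stationary (0) if they stay (almost) constant."""
    T, H, W = pulse_sequence.shape
    binarized = np.zeros((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            fire_times = np.flatnonzero(pulse_sequence[:, y, x])
            if fire_times.size < 3:
                continue                              # too few pulses to judge
            intervals = np.diff(fire_times)
            if intervals.max() - intervals.min() > tolerance:
                binarized[y, x] = 1                   # motion position
    return binarized
```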
And 130, identifying license plate information of the vehicle based on the binarized data.
In general, the outline of a license plate and the plate characters have obvious texture features, while the background color of the plate is a non-textured region. In the binarized data formed in step 120, the plate outline and the plate characters are therefore identified as motion regions while the background-color region of the plate is identified as stationary, so the plate outline and the digits and letters on the plate generally have clear boundaries and contours in the binarized data. The binarized data can thus be recognized directly to determine the license plate information of the vehicle.
As an example, the binarized data may be identified using a pre-trained image recognition model to determine license plate information of the vehicle.
According to the license plate detection method, the moving area and the static area in the pulse sequence signal of the monitoring area can be identified, binary data are formed according to the identification result, and then license plate information of a vehicle is identified based on the binary data, so that image reconstruction of all pulse sequence signals is avoided, consumption and time delay of calculation resources are reduced, and the license plate detection efficiency and accuracy are improved.
In some optional implementations of this embodiment, the foregoing step 120 may further employ the following manner: in response to the presence of a motion region in the pulse sequence signal, binarized data is generated from the motion region and the stationary region.
It will be appreciated that when no vehicle passes through the monitoring area, no movement area will occur in the pulse sequence signal, and no license plate detection is required.
In view of this, in the present embodiment the binarized data are generated only when a motion region appears in the pulse sequence signal. This enables automatic detection and avoids useless computation when no vehicle passes through the monitoring area, thereby avoiding a waste of resources.
In some optional implementations of this embodiment, the foregoing step 130 may further employ a flow shown in fig. 2, and as shown in fig. 2, the flow includes the following steps:
step 210, determining a target area in the pulse sequence signal based on the binarized data.
The target area at least comprises a coverage area of a license plate of the vehicle in the pulse sequence signal.
In this embodiment, the target region may be determined from the pulse sequence signal based on the binarized data, and as an example, the entire motion region may be taken as the target region; alternatively, when the vehicle contour and boundary in the binarized data are sufficiently clear, the vehicle coverage area may be regarded as the target area; alternatively, the license plate region of the vehicle may be set as the target region.
Step 220, extracting a target pulse sequence signal of the pixels in the target area from the pulse sequence signal.
In this embodiment, after the target area is determined, the pulse sequence of each pixel position in the target area may be extracted, and thus the target pulse sequence signal may be obtained.
Step 230, extracting characteristic information from the target pulse sequence signal by using a pre-trained encoder.
As an example, the encoder may be a spiking neural network (SNN) or a 3D convolutional neural network (3D CNN).
And 240, decoding the characteristic information by using a pre-trained decoder to obtain license plate information.
As an example, the decoder may be a multilayer perceptron (MLP) or a 2D convolutional neural network (2D CNN).
In this embodiment, the target region can be determined using the binarized data, and the pulse sequence signal within the target region is then encoded and decoded to identify the license plate information of the vehicle, without performing image reconstruction on the pulse sequence signal. This avoids the computing-resource consumption caused by image reconstruction and improves license plate detection efficiency.
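As a hedged illustration of this encode-then-decode pipeline (not the patented network itself), a 3D-CNN encoder paired with an MLP decoder could be sketched in PyTorch as follows; the layer sizes, the character-set size, and the plate length of 7 are assumptions.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 68   # assumed size of the plate character set
PLATE_LEN = 7      # assumed number of plate characters

class PlateEncoder(nn.Module):
    """3D-CNN encoder over a (T, H, W) target pulse block (illustrative)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x):                  # x: (B, 1, T, H, W)
        return self.proj(self.net(x).flatten(1))

class PlateDecoder(nn.Module):
    """MLP decoder producing per-character logits (illustrative)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, PLATE_LEN * NUM_CLASSES),
        )

    def forward(self, feat):
        return self.mlp(feat).view(-1, PLATE_LEN, NUM_CLASSES)

# Shape check: a target pulse block of 64 moments at 32 x 96 pixels.
logits = PlateDecoder()(PlateEncoder()(torch.zeros(1, 1, 64, 32, 96)))
```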
Referring to fig. 3 on the basis of fig. 1, fig. 3 shows a flowchart for identifying a license plate in an embodiment of the license plate detection method of the present disclosure, and as shown in fig. 3, the above step 130 may further include the following steps.
Step 310, determining a mask area of the vehicle based on the binarized data.
In this embodiment, the motion region in the binarized data includes the outer contour of the vehicle and the textured areas inside that contour. By recognizing the binarized data with a pre-trained image recognition model, the coverage area of the vehicle in the binarized data can be identified and the mask region of the vehicle determined.
Step 320, extracting a first pulse sequence signal corresponding to the pixel position in the mask region from the pulse sequence signal.
And 330, performing image reconstruction on the first pulse sequence signal to obtain a vehicle image.
As an example, image reconstruction may be performed on the first pulse sequence signal by TFI (texture from interval), TFP (texture from playback), a deep learning model, or the like, to obtain the vehicle image.
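For reference, TFI-style reconstruction is commonly taken to estimate brightness from the reciprocal of the inter-pulse interval, while TFP-style reconstruction averages the pulse count over a playback window; the sketch below follows those common formulations under the NumPy layout assumed earlier and is not the patent's own reconstruction.

```python
import numpy as np

def reconstruct_tfi(pulses: np.ndarray, t: int, c: float = 255.0) -> np.ndarray:
    """Texture from interval: brightness ~ c / (interval straddling moment t)."""
    T, H, W = pulses.shape
    img = np.zeros((H, W), dtype=np.float32)
    for y in range(H):
        for x in range(W):
            fires = np.flatnonzero(pulses[:, y, x])
            before, after = fires[fires <= t], fires[fires > t]
            if before.size and after.size:
                img[y, x] = c / (after[0] - before[-1])
    return img

def reconstruct_tfp(pulses: np.ndarray, t: int, window: int = 32,
                    c: float = 255.0) -> np.ndarray:
    """Texture from playback: brightness ~ pulse count averaged over a window."""
    lo, hi = max(0, t - window // 2), min(pulses.shape[0], t + window // 2)
    return c * pulses[lo:hi].sum(axis=0).astype(np.float32) / (hi - lo)
```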
And 340, identifying the vehicle image and determining license plate information.
As an example, the license plate information may be obtained by performing recognition processing on the vehicle image using a pre-trained image recognition model, and recognizing the license plate number of the vehicle therefrom.
In the embodiment shown in fig. 3, reconstruction is performed only on the first pulse sequence signal within the mask region of the vehicle rather than on the entire pulse sequence signal, which reduces the consumption of computing resources, improves computing efficiency, and lowers latency; recognizing the reconstructed vehicle image to obtain the license plate information also improves recognition accuracy.
Referring next to fig. 4, fig. 4 illustrates a flowchart of identifying a license plate in one embodiment of the license plate detection method of the present disclosure, and as illustrated in fig. 4, the above-described step 130 may include the following steps.
Step 410, determining a license plate region of the vehicle based on the binarized data.
In this embodiment, the motion region in the binarized data includes the outline of the license plate. By recognizing the binarized data with a pre-trained image recognition model, the coverage area of the vehicle's license plate in the binarized data can be identified, yielding the license plate region.
And 420, extracting a second pulse sequence signal corresponding to the pixel position in the license plate region from the pulse sequence signal.
And 430, performing image reconstruction on the second pulse sequence signal to obtain a license plate image.
Step 440, identifying license plate image and determining license plate information.
As an example, license plate information may be identified by performing a recognition process on the license plate image using a pre-trained image recognition model.
In the embodiment shown in fig. 4, the license plate region can be identified from the binarized data, and then, only the second pulse sequence signal in the license plate region is subjected to image reconstruction to obtain a license plate image, so that the consumption of operation resources can be further reduced, and further improvement of operation efficiency and identification accuracy is facilitated.
Referring next to fig. 5, fig. 5 illustrates a flow chart of identifying a movement region in one embodiment of a license plate detection method of the present disclosure, as shown in fig. 5, the flow comprising the following steps.
Step 510, determining a region to be identified in a pixel region of the pulse sequence signal.
In this embodiment, the area to be identified represents an area where a license plate of a vehicle may appear, and may be generally set according to camera parameters and actual requirements of the pulse camera.
For example, a pulse camera may be used to monitor vehicles traveling on a road. Assuming the field of view of the pulse camera covers the entire road area, the monitoring area may be the whole road area or a partial area of it, for example the middle lane of the road; in that case, the coverage area corresponding to the middle lane in the pulse sequence signal may be determined as the region to be identified. Alternatively, multiple images of the middle lane may be acquired and the license plate positions in those images analyzed statistically to determine the coverage area of license plates within the middle lane; this coverage area is then mapped into the pixel region of the pulse sequence signal to obtain the region to be identified.
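A minimal sketch of restricting processing to the region to be identified is given below; it assumes the region is a rectangular pixel range mapped from the lane or plate coverage, and the coordinates are hypothetical.

```python
import numpy as np

# Hypothetical region to be identified: the pixel rows and columns that the
# middle lane (or the statistically determined plate coverage) maps to.
ROI = (slice(None), slice(80, 200), slice(120, 320))   # (all moments, rows, cols)

def crop_region_to_identify(pulse_sequence: np.ndarray) -> np.ndarray:
    """Keep only the pulses that fall inside the region to be identified."""
    return pulse_sequence[ROI]
```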
Step 520, predicting whether each pixel position in the region to be identified is a motion position based on the latest pulse interval length and the historical pulse interval length of each pixel position in the region to be identified, and obtaining a prediction result.
The latest pulse interval length represents the interval time between the most recently received pulse and the previous pulse at any pixel position, and a historical pulse interval length represents the interval time between two adjacent historical pulses received at any pixel position.
In this embodiment, whether a pixel position is a motion position can be predicted by comparing its latest pulse interval length with its historical pulse interval lengths. For example, if the difference between the latest pulse interval length and the historical pulse interval length of a pixel position exceeds a preset difference threshold, the pixel position is a motion position; otherwise, if the difference does not exceed the preset difference threshold, the pixel position is stationary. Traversing all pixel positions in the region to be identified yields the prediction result.
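A minimal sketch of this comparison, assuming the per-pixel interval history is available as an array; using the mean of the historical intervals as the reference value and the value of the difference threshold are both assumptions.

```python
import numpy as np

def is_motion_position(latest_interval: float,
                       historical_intervals: np.ndarray,
                       diff_threshold: float = 2.0) -> bool:
    """Predict motion if the latest pulse interval deviates from the
    historical intervals by more than the preset difference threshold."""
    reference = historical_intervals.mean()
    return abs(latest_interval - reference) > diff_threshold

# A pixel whose intervals were steady at 10 readout moments and then
# shorten to 6 is predicted to be a motion position.
print(is_motion_position(6, np.array([10, 10, 10, 10])))   # True
```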
Step 530, determining a motion area in the pulse sequence signal based on the prediction result.
In the present embodiment, an area composed of pixel positions at a motion position in the prediction result may be determined as a motion area.
In the embodiment shown in fig. 5, by comparing the historical pulse interval lengths with the latest pulse interval length to predict whether each pixel position is a motion position, the motion region in the pulse sequence signal can be determined quickly and accurately, which helps to improve computing efficiency and accuracy.
Referring next to fig. 6, fig. 6 shows a flowchart of predicting whether a pixel position is a motion position in one embodiment of the license plate detection method of the present disclosure, as shown in fig. 6, the flowchart including the following steps.
Step 610, constructing a plurality of deterministic finite state automata in one-to-one correspondence with the pixel positions in the region to be identified.
Each deterministic finite state automaton pre-stores a state transfer function and a hidden state, the hidden state being a preset number of historical pulse interval lengths corresponding to the pixel position.
In this embodiment, each pixel position in the region to be identified corresponds to one deterministic finite state automaton, whose hidden state is a preset number of historical pulse interval lengths corresponding to that pixel position and whose output indicates whether the pixel position is a motion position.
The number of historical pulse interval lengths may be set according to the computing performance of the executing entity and the camera parameters (e.g., aperture size).
Optionally, the deterministic finite state automata corresponding to different pixel positions may have different state transfer functions and different numbers of historical pulse interval lengths. Within the limits of the available computing performance, this improves how well each automaton matches its position and thus improves prediction accuracy.
Step 620, obtaining the latest pulse interval length of each pixel position in the region to be identified and inputting it into the corresponding deterministic finite state automaton.
Step 630, comparing, with the plurality of deterministic finite state automata, the latest pulse interval lengths with the historical pulse interval lengths, and predicting, according to the respective state transfer functions, whether the corresponding pixel positions are motion positions, to obtain the output results of the deterministic finite state automata.
Step 640, determining the prediction result based on the output results of the plurality of deterministic finite state automata.
In this embodiment, the output result of each deterministic finite state automaton indicates whether the corresponding pixel position is a motion position, and the set of output results of all the deterministic finite state automata constitutes the prediction result of the region to be identified.
In a specific example, the region to be identified includes pixels A, B, and C, for which deterministic finite state automata a, b, and c are constructed in one-to-one correspondence. Automaton a pre-stores m historical pulse interval lengths of pixel A and a state transfer function f1; automaton b pre-stores n historical pulse interval lengths of pixel B and a state transfer function f2; and automaton c pre-stores s historical pulse interval lengths of pixel C and a state transfer function f3, where m, n, and s are hyperparameters whose values may be the same or different. When pixels A, B, and C each receive a new pulse, their latest pulse interval lengths can be determined from the interval time between the newly received pulse and the previous pulse, denoted ta, tb, and tc respectively. Then ta, tb, and tc are input into automata a, b, and c respectively, whether pixels A, B, and C are motion positions is determined from the outputs of automata a, b, and c, and the outputs of a, b, and c are combined to obtain the prediction result of the region to be identified.
Alternatively, the output of each deterministic finite state automaton may take values in {0, 1}, where 1 indicates that the pixel position is a motion position and 0 indicates that it is a stationary position. In this way, the prediction result of the region to be identified is a matrix of 0s and 1s, and the binarized data can be generated from the prediction result simply by marking the pixel positions accordingly, which further improves the efficiency of generating the binarized data.
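The sketch below shows one way such a per-pixel deterministic finite state automaton, with a hidden state of m historical intervals and a {0, 1} output, could be organized; the concrete state transfer function (a thresholded deviation from the stored intervals) is an assumption made for illustration.

```python
from collections import deque

class PixelDFA:
    """One deterministic finite state automaton per pixel position.

    Hidden state: the last `m` historical pulse interval lengths.
    Output: 1 if the pixel is predicted to be a motion position, else 0.
    """
    def __init__(self, m: int = 4, tolerance: float = 2.0):
        self.history = deque(maxlen=m)        # hidden state
        self.tolerance = tolerance

    def step(self, latest_interval: float) -> int:
        if not self.history:
            self.history.append(latest_interval)
            return 0
        reference = sum(self.history) / len(self.history)
        output = 1 if abs(latest_interval - reference) > self.tolerance else 0
        self.history.append(latest_interval)  # state transition
        return output

# One automaton per pixel position in the region to be identified.
automata = {pixel: PixelDFA(m=4) for pixel in ("A", "B", "C")}
latest = {"A": 10.0, "B": 10.0, "C": 6.0}
prediction = {pixel: automata[pixel].step(t) for pixel, t in latest.items()}
```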
In some alternative implementations of the present embodiment, the prediction result may be determined as follows: for each pixel position in the region to be identified, the output results of the deterministic finite state automata within a preset neighborhood of that pixel position are acquired, and whether the pixel position is a motion position is determined based on the acquired output results, yielding the prediction result.
As an example, if the deterministic finite state automaton corresponding to a pixel position outputs a stationary position while the eight deterministic finite state automata surrounding that pixel position all output motion positions, the pixel position may still be determined to be a motion position. Combining the judgment results of all pixel positions yields the prediction result.
In this embodiment, whether a pixel position is a motion position is determined jointly by a plurality of adjacent deterministic finite state automata, which improves the ability of the automata to recognize erroneous signals and helps to reduce the misjudgment rate.
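A minimal sketch of this neighborhood check, assuming the automata outputs have been collected into a 2D array; the voting rule shown (a stationary output surrounded by eight motion outputs is flipped to motion) follows the example above.

```python
import numpy as np

def refine_with_neighbors(outputs: np.ndarray) -> np.ndarray:
    """Flip a pixel to motion if the eight surrounding automata all output 1."""
    refined = outputs.copy()
    H, W = outputs.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if outputs[y, x] == 0:
                neighborhood = outputs[y - 1:y + 2, x - 1:x + 2]
                if neighborhood.sum() == 8:    # all eight neighbours are motion
                    refined[y, x] = 1
    return refined
```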
In the embodiment shown in fig. 6, the deterministic finite state automata compare the latest pulse interval length of each pixel position with a preset number of historical pulse interval lengths and predict, based on the state transfer functions, whether the pixel position is a motion position. This allows motion positions to be predicted accurately and efficiently, which helps to further improve computing efficiency and accuracy.
In some alternative implementations of the embodiment shown in fig. 6, prior to step 620, the process may further include the steps of: if the lengths of two continuous pulse intervals in the pulse sequence signal are smaller than a first preset threshold value, determining the pulses corresponding to the two continuous pulse intervals as false pulses, and sending alarm information; two consecutive pulse intervals are spliced to a new pulse interval.
In practice, when the pulse camera reads out the integrator through a zero-setting-and-polling mechanism, clock skew may occur, or an external random disturbance may cause a bit flip; either can make two consecutive pulses appear suddenly in the pulse sequence signal or insert an extra pulse into an otherwise stable pulse sequence, so that the lengths of consecutive pulse intervals drop abruptly and then return to their original value. These are false pulses.
In this embodiment, the first preset threshold may be determined according to a preset scaling factor and an interval time between pulse signals normally collected by the pulse camera. As an example, assuming that the interval time between pulse signals normally collected by the pulse camera is 10ms and the proportionality coefficient may be 0.6, the first preset threshold is 6ms. When two continuous pulse intervals with the lengths of 0.4ms and 0.5ms respectively appear in the pulse sequence signal, determining 3 pulses corresponding to the two continuous pulse intervals as false pulses and sending alarm information; at the same time, the two consecutive pulse intervals can be spliced to a new pulse interval of length 0.9 ms.
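A minimal sketch of the false-pulse check, operating on a list of pulse interval lengths for one pixel; the 10 ms normal interval and the 0.6 proportionality coefficient follow the example above, and printing a message stands in for sending alarm information.

```python
def remove_false_pulses(intervals, normal_interval=10.0, coefficient=0.6):
    """Splice two consecutive too-short intervals into one new interval."""
    threshold = coefficient * normal_interval          # e.g. 6 ms
    cleaned, i = [], 0
    while i < len(intervals):
        if (i + 1 < len(intervals)
                and intervals[i] < threshold and intervals[i + 1] < threshold):
            print("alarm: suspected false pulses")     # stand-in for alarm info
            cleaned.append(intervals[i] + intervals[i + 1])   # 0.4 + 0.5 -> 0.9
            i += 2
        else:
            cleaned.append(intervals[i])
            i += 1
    return cleaned

print(remove_false_pulses([10.0, 0.4, 0.5, 10.0]))     # [10.0, 0.9, 10.0]
```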
To avoid interference of false pulses with the prediction result, in this embodiment the lengths of two consecutive pulse intervals are compared against the first preset threshold to identify false pulses in the pulse sequence signal, and the two consecutive pulse intervals corresponding to the false pulses are spliced into a new, more reasonable pulse interval, ensuring the reliability of the latest pulse interval lengths input to the deterministic finite state automata. At the same time, alarm information can be sent to prompt staff to inspect the pulse camera in time.
In some alternative implementations of the embodiment shown in fig. 6, the process may further include the following steps, prior to step 620: if the pulse sequence signal has an isolated pulse interval with the length larger than a second preset threshold value, determining that the pulse sequence signal has a pulse loss phenomenon, and sending alarm information; the isolated pulse interval is split into two new pulse intervals.
In practice, a bit flip caused by an external random disturbance may suddenly lengthen a pulse interval. If this happens in isolation, the pulse camera may have dropped a pulse; if it happens continuously, the pulse camera may be malfunctioning.
In this embodiment, the second preset threshold may be determined according to an interval time between the preset multiple and the pulse signal normally collected by the pulse camera, for example, the second preset threshold may be twice the normal interval time. The length of the new pulse interval may be equal or unequal.
Alternatively, the length of the new pulse interval may be determined from the historical pulse interval length such that the length of the new pulse interval may be consistent with the historical pulse interval length.
As an example, assuming that the interval time between pulse signals normally collected by the pulse camera is 5 ms, the second preset threshold may be 10 ms. When a pulse interval of length 11 ms appears in the pulse sequence signal and the lengths of the pulse intervals immediately before and after it do not exceed 10 ms, it can be determined that a pulse has been lost in the pulse sequence signal, and alarm information can be sent; at the same time, the 11 ms interval is split into two new pulse intervals of 5.5 ms each.
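A minimal sketch of the pulse-loss check, using the twofold threshold from the example above; splitting the isolated interval into two equal halves is one of the options mentioned, and printing a message again stands in for sending alarm information.

```python
def split_isolated_intervals(intervals, normal_interval=5.0, multiple=2.0):
    """Split an isolated over-long pulse interval into two new intervals."""
    threshold = multiple * normal_interval             # e.g. 10 ms
    repaired = []
    for i, length in enumerate(intervals):
        prev_ok = i == 0 or intervals[i - 1] <= threshold
        next_ok = i == len(intervals) - 1 or intervals[i + 1] <= threshold
        if length > threshold and prev_ok and next_ok:
            print("alarm: suspected pulse loss")       # stand-in for alarm info
            repaired.extend([length / 2, length / 2])  # 11 ms -> 5.5 ms + 5.5 ms
        else:
            repaired.append(length)
    return repaired

print(split_isolated_intervals([5.0, 11.0, 5.0]))      # [5.0, 5.5, 5.5, 5.0]
```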
In this embodiment, pulse loss in the pulse sequence signal can be identified, and the over-long pulse interval caused by the lost pulse is split into two new pulse intervals, ensuring the reliability of the latest pulse interval lengths input to the deterministic finite state automata and avoiding interference of the noise caused by pulse loss with the prediction result. At the same time, alarm information can be sent to prompt staff to inspect the pulse camera in time.
Fig. 7 is a schematic structural diagram of an embodiment of a license plate detection device of the present disclosure, where the device shown in fig. 7 includes: a signal acquisition unit 710 configured to acquire a pulse sequence signal of a monitoring area; the region identifying unit 720 is configured to determine a motion region and a stationary region in the pulse sequence signal, and perform binarization processing on the pulse sequence signal according to the motion region and the stationary region to obtain binarized data, wherein the motion region represents a pixel region corresponding to a vehicle in the monitored region in the pulse sequence signal; the information detection unit 730 is configured to identify license plate information of the vehicle based on the binarized data.
In one embodiment, the information detection unit 730 includes: a target area module configured to determine a target area in the pulse sequence signal based on the binarized data, the target area including at least a coverage area of a license plate of the vehicle in the pulse sequence signal; a signal extraction module configured to extract a target pulse sequence signal of a pixel within a target region from the pulse sequence signal; an encoder module configured to extract feature information from the target pulse sequence signal using a pre-trained encoder; and the decoder module is configured to decode the characteristic information by utilizing a pre-trained decoder to obtain license plate information.
In one embodiment, the information detection unit 730 includes: a first segmentation module configured to determine a mask region of the vehicle based on the binarized data; the first extraction module is configured to extract a first pulse sequence signal corresponding to the pixel position in the mask area from the pulse sequence signal; the first reconstruction module is configured to reconstruct the image of the first pulse sequence signal to obtain a vehicle image; the first recognition module is configured to recognize the vehicle image and determine license plate information.
In one embodiment, the information detection unit 730 includes: a second segmentation module configured to determine a license plate region of the vehicle based on the binarized data; the second extraction module is configured to extract a second pulse sequence signal corresponding to the pixel position in the license plate region from the pulse sequence signal; the second reconstruction module is configured to reconstruct the image of the second pulse sequence signal to obtain a license plate image; and the second recognition module is configured to recognize the license plate image and determine license plate information.
In one embodiment, the region identification unit 720 is further configured to: in response to a motion region existing in the pulse sequence signal, perform binarization processing on the pulse sequence signal according to the motion region and the stationary region to generate the binarized data.
In one embodiment, the region identification unit 720 includes: a region determining module configured to determine a region to be identified in a pixel region of the pulse sequence signal; the position prediction module is configured to predict whether each pixel position in the region to be recognized is a motion position or not based on the latest pulse interval length and the historical pulse interval length of each pixel position in the region to be recognized, and obtain a prediction result, wherein the latest pulse interval length represents the interval time between the last received pulse and the last pulse of any pixel position, and the historical pulse interval length represents the interval time between two adjacent historical pulses received by any pixel position; and a result determination module configured to determine a motion region in the pulse sequence signal based on the prediction result.
In one embodiment, the position prediction module further includes: an automaton construction submodule configured to construct a plurality of deterministic finite state automata in one-to-one correspondence with the pixel positions in the region to be identified, wherein each deterministic finite state automaton pre-stores a state transfer function and a hidden state, the hidden state being a preset number of historical pulse interval lengths corresponding to the pixel position; an acquisition submodule configured to acquire the latest pulse interval length of each pixel position in the region to be identified and input it into the corresponding deterministic finite state automaton; a comparison submodule configured to compare, with the plurality of deterministic finite state automata, the latest pulse interval lengths with the historical pulse interval lengths, and to predict, according to the respective state transfer functions, whether the corresponding pixel positions are motion positions, obtaining the output results of the deterministic finite state automata; and a result determination submodule configured to determine the prediction result based on the output results of the plurality of deterministic finite state automata.
In one embodiment, the result determination submodule is further configured to: for each pixel position in the region to be identified, acquire the output results of the deterministic finite state automata within a preset neighborhood of the pixel position, and determine whether the pixel position is a motion position based on the acquired output results, to obtain the prediction result.
In one embodiment, the position prediction module further includes a first noise reduction sub-module configured to: if the lengths of two continuous pulse intervals in the pulse sequence signal are smaller than a first preset threshold value, determining the pulses corresponding to the two continuous pulse intervals as false pulses, and sending alarm information; two consecutive pulse intervals are spliced to a new pulse interval.
In one embodiment, the position prediction module further includes a second noise reduction sub-module configured to: if the pulse sequence signal has an isolated pulse interval with the length larger than a second preset threshold value, determining that the pulse sequence signal has a pulse loss phenomenon, and sending alarm information; the isolated pulse interval is split into two new pulse intervals.
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 8. The electronic device may be either or both of the first device and the second device, or a stand-alone device independent thereof, which may communicate with the first device and the second device to receive the acquired input signals therefrom.
Fig. 8 illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
As shown in fig. 8, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device to perform the desired functions.
The memory may store one or more computer program products and may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer programs may be stored on the computer-readable storage medium and run by the processor to implement the license plate detection methods of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device may further include: input devices and output devices, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
In addition, the input device may include, for example, a keyboard, a mouse, and the like.
The output device may output various information, including the determined detection results, to the outside. The output device may include, for example, a display, speakers, a printer, a communication network and the remote output devices connected to it, and the like.
Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 8; components such as buses and input/output interfaces are omitted. In addition, the electronic device may include any other suitable components depending on the particular application.
In another embodiment of the present application, the electronic device may include a pulse signal readout circuit, and/or a pixel cell array circuit, and/or a chip having the above-described pixel cell array circuit.
Specifically, the apparatus includes at least one of: cameras, audio/video players, navigation devices, fixed location terminals, entertainment devices, smartphones, communication devices, mobile devices, vehicles or facilities, industrial devices, medical devices, security devices, flight devices, home appliances.
In embodiments of the present application, cameras include, but are not limited to, pulse cameras, high-speed cameras, industrial inspection cameras, and the like. Cameras also include, but are not limited to: vehicle-mounted cameras, mobile phone cameras, traffic cameras, cameras mounted on flyable objects, medical cameras, security cameras, and household appliance cameras.
Taking a pulse camera as an example, the device provided by the embodiment of the application is described in detail. Fig. 9 is a schematic structural diagram of a pulse camera according to an embodiment of the present application. As shown in fig. 9, the pulse camera includes: lens 901, pulse signal circuit 902, data processing circuit 903, nonvolatile memory 904, power supply circuit 905, volatile memory 906, control circuit 907, and I/O interface 908.
The lens 901 is used to receive incident light, i.e., an optical signal, from a subject.
A pulse signal circuit 902 for converting the optical signal received through the lens 901 into an electrical signal and generating a pulse signal from the electrical signal. The pulse signal circuit 902 includes, for example, the pulse signal readout circuit described above, and/or the pixel cell array circuit described above, and/or a chip having the pixel cell array circuit described above.
The data processing circuit 903 is configured to control the pulse signal readout process. The data processing circuit 903 includes, for example, an arithmetic processing unit (e.g., a CPU) and/or a graphics processing unit (GPU), and, for example, controls the pulse signal readout process of the pulse signal readout circuit, controls the readout row selector therein to transmit a row readout signal, controls the reset selector to transmit a column reset signal, and the like.
The volatile memory 906 is, for example, a Random Access Memory (RAM). The non-volatile memory 904 is, for example, a Solid State Disk (SSD), a Hybrid Hard Disk (HHD), a Secure Digital (SD) card, a mini SD card, or the like.
In an embodiment of the present invention, the pulse camera further includes a display unit for real-time or playback display of the pulse signal/image information. The pulse camera according to the embodiment of the present invention may further include at least one of the following wired/wireless transmission interfaces: WiFi interfaces, Bluetooth interfaces, USB interfaces, RJ45 interfaces, Mobile Industry Processor Interface (MIPI) interfaces, Low-Voltage Differential Signaling (LVDS) interfaces, and other interfaces with wired or wireless transmission capabilities.
The pulse camera provided by the embodiment of the invention can be used to detect visible light, infrared light, ultraviolet light, X-rays, and the like, and can be applied to various scenarios, including but not limited to the following:
the camera can be used as a vehicle-mounted camera to be installed in various vehicles or facilities, for example, used for information acquisition and control of vehicle-road coordination, intelligent traffic and automatic driving. For example, as a high-speed rail travel recorder installed in a rail vehicle such as a high-speed rail or on a rail traffic line; it may also be installed in an autonomous vehicle or a vehicle equipped with an Advanced Driving Assistance System (ADAS), for example, to detect and alert information of a vehicle, a pedestrian, a lane, a driver, or the like.
It can be used as a traffic camera installed on a traffic signal pole for capturing, early warning, and cooperative control of vehicles and pedestrians on urban roads and expressways.
It can be used as an industrial detection camera, for example installed along a high-speed railway line for line patrol and safety detection; it can also be used for detection and early warning in specific industrial scenarios such as coal mine conveyor belt fracture detection, substation arc detection, real-time detection of wind power generation blades, and inspection of high-speed turbines without stopping them.
It can be mounted on a flyable object, such as an airplane or a satellite, for high-definition imaging of objects in high-speed flight or even high-speed rotation scenarios.
It can also be used in industry (machine vision in smart manufacturing, etc.), civilian applications (judicial evidence, sports refereeing, etc.), and consumer electronics (cameras, video media, etc.).
It can be used as a medical camera for high-definition medical imaging in clinical diagnosis and treatment, such as medical, cosmetic, and health care applications.
It can be used as a sports camera or a wearable camera, for example a head-mounted camera or a camera embedded in a wristwatch, for shooting scenes at various sports venues, in daily leisure sports, and the like.
It can also be used as a security camera, a mobile phone camera, a household appliance camera, and the like.
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in a license plate detection method according to the various embodiments of the present disclosure described in the "exemplary methods" section of the present description.
Program code for performing the operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Further, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the license plate detection method according to various embodiments of the present disclosure described in the above section of the present disclosure.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments; however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limitations, and are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
In this specification, the embodiments are described in a progressive manner, and each embodiment focuses on its differences from the other embodiments, so the same or similar parts among the embodiments may be referred to one another. Since the system embodiments essentially correspond to the method embodiments, their description is relatively brief; for relevant details, reference may be made to the description of the method embodiments.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The terms "or" and "and" as used herein refer to, and are used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Further, in one of the embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the method according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that, in the apparatus, devices, and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.
Claims (13)
1. A license plate detection method, comprising:
acquiring a pulse sequence signal of a monitoring area;
determining a motion area and a static area in the pulse sequence signal, and performing binarization processing on the pulse sequence signal according to the motion area and the static area to obtain binarized data, wherein the motion area represents a pixel area in the pulse sequence signal corresponding to a vehicle in the monitoring area; and
identifying license plate information of the vehicle based on the binarized data;
wherein determining the motion area in the pulse sequence signal comprises:
determining a region to be identified in a pixel region of the pulse sequence signal;
predicting, based on the latest pulse interval length and the historical pulse interval lengths of each pixel position in the region to be identified, whether each pixel position in the region to be identified is a motion position, to obtain a prediction result, wherein the predicting comprises:
constructing a plurality of deterministic finite state automata in one-to-one correspondence with the pixel positions in the region to be identified, wherein a state transfer function and a hidden state are pre-stored in each deterministic finite state automaton, the hidden state being a preset number of historical pulse interval lengths corresponding to the pixel position;
acquiring the latest pulse interval length of each pixel position in the region to be identified, and correspondingly inputting it into the plurality of deterministic finite state automata;
obtaining output results of the plurality of deterministic finite state automata;
determining the prediction result based on the output results of the plurality of deterministic finite state automata;
and determining the motion area in the pulse sequence signal based on the prediction result.
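A minimal illustrative sketch of the binarization step recited above, assuming pixels in the motion area map to 1 and pixels in the static area to 0; the 0/1 polarity is an assumption, as the claim only requires binarization according to the two areas.

```python
import numpy as np

def binarize(motion_mask):
    """motion_mask: boolean (H, W) map of the motion area determined from the
    pulse sequence signal. Pixels in the motion area become 1, pixels in the
    static area become 0."""
    return motion_mask.astype(np.uint8)
```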
2. The method of claim 1, wherein identifying license plate information of the vehicle based on the binarized data comprises:
determining a target area in the pulse sequence signal based on the binarized data, wherein the target area at least comprises a coverage area of a license plate of the vehicle in the pulse sequence signal;
extracting a target pulse sequence signal of pixels in the target area from the pulse sequence signal;
extracting feature information from the target pulse sequence signal by using a pre-trained encoder;
and decoding the feature information by using a pre-trained decoder to obtain the license plate information.
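A minimal illustrative sketch of the encoder/decoder path of claim 2, assuming a small convolutional encoder over the target pulse sequence (stacked along the channel axis) and a linear decoder over a fixed character set; the claim does not specify the network architectures, and NUM_CHARS, MAX_PLATE_LEN, and the layer sizes are assumptions.

```python
import torch.nn as nn

NUM_CHARS = 70     # assumed size of the license plate character set
MAX_PLATE_LEN = 8  # assumed maximum number of characters on a plate

class PlateEncoder(nn.Module):
    def __init__(self, t_steps=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(t_steps, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, pulses):                   # pulses: (B, T, H, W) binary pulse planes
        return self.features(pulses).flatten(1)  # (B, 64) feature information

class PlateDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(64, MAX_PLATE_LEN * NUM_CHARS)

    def forward(self, feats):                    # feats: (B, 64)
        logits = self.head(feats).view(-1, MAX_PLATE_LEN, NUM_CHARS)
        return logits.argmax(-1)                 # predicted character indices
```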
3. The method of claim 1, wherein identifying license plate information of the vehicle based on the binarized data comprises:
determining a mask region of the vehicle based on the binarized data;
extracting a first pulse sequence signal of a pixel corresponding to the mask region from the pulse sequence signal;
performing image reconstruction on the first pulse sequence signal to obtain a vehicle image;
and identifying the vehicle image and determining the license plate information.
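A minimal illustrative sketch of the reconstruction step of claim 3, using a simple firing-rate reconstruction (per-pixel pulse count over a time window, normalized to gray levels) restricted to the mask region; the claim does not limit the reconstruction method, and this rate-based scheme is only one possibility.

```python
import numpy as np

def reconstruct_from_pulses(pulse_planes, mask):
    """pulse_planes: (T, H, W) binary pulse planes of the first pulse sequence
    signal; mask: boolean (H, W) mask region of the vehicle. Returns an 8-bit
    gray-scale vehicle image covering only the mask region."""
    rate = pulse_planes.sum(axis=0).astype(np.float32)  # pulses per pixel
    rate[~mask] = 0.0                                   # keep only the mask region
    if rate.max() > 0:
        rate = rate / rate.max()                        # normalize firing rate
    return (rate * 255).astype(np.uint8)
```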
4. The method of claim 1, wherein identifying license plate information of the vehicle based on the binarized data comprises:
determining a license plate region of the vehicle based on the binarized data;
extracting a second pulse sequence signal of a pixel corresponding to the license plate region from the pulse sequence signal;
performing image reconstruction on the second pulse sequence signal to obtain a license plate image;
and identifying the license plate image and determining the license plate information.
5. The method of claim 1, wherein determining a motion area and a static area in the pulse sequence signal, and performing binarization processing on the pulse sequence signal according to the motion area and the static area to obtain binarized data, comprises:
in response to the existence of a motion area in the pulse sequence signal, performing binarization processing on the pulse sequence signal according to the motion area and the static area to generate the binarized data.
6. The method according to any one of claims 1 to 5, wherein the latest pulse interval length represents the interval time between the most recently received pulse at any pixel position and the immediately preceding pulse, and the historical pulse interval length represents the interval time between two adjacent historical pulses received at any pixel position;
obtaining output results of the plurality of deterministic finite state automata, including:
comparing, in each of the plurality of deterministic finite state automata, the latest pulse interval length with the historical pulse interval lengths, and predicting whether the corresponding pixel position is a motion position according to the respective state transfer function, to obtain the output result of that deterministic finite state automaton.
7. The method of claim 6, wherein determining the prediction result based on the output results of the plurality of deterministic finite state automata comprises:
acquiring, for each pixel position in the region to be identified, the output results of the deterministic finite state automata within a preset neighborhood of that pixel position, and determining, based on the acquired output results, whether the pixel position is a motion position, to obtain the prediction result.
8. The method of claim 6, wherein, prior to acquiring the latest pulse interval length of each pixel position in the region to be identified, the method further comprises:
if the lengths of two consecutive pulse intervals in the pulse sequence signal are both smaller than a first preset threshold, determining the pulses corresponding to the two consecutive pulse intervals to be false pulses, and sending alarm information;
and splicing the two consecutive pulse intervals into a single new pulse interval.
9. The method of claim 6, wherein, prior to acquiring the latest pulse interval length of each pixel position in the region to be identified, the method further comprises:
if the pulse sequence signal contains an isolated pulse interval whose length is larger than a second preset threshold, determining that a pulse has been lost in the pulse sequence signal, and sending alarm information;
and splitting the isolated pulse interval into two new pulse intervals.
10. A license plate detection device, comprising:
a signal acquisition unit configured to acquire a pulse sequence signal of a monitoring area;
an area identification unit configured to determine a motion area and a static area in the pulse sequence signal, and to perform binarization processing on the pulse sequence signal according to the motion area and the static area to obtain binarized data, wherein the motion area represents a pixel area in the pulse sequence signal corresponding to a vehicle in the monitoring area; and
an information detection unit configured to identify license plate information of the vehicle based on the binarized data;
wherein the area identification unit includes:
a region determination module configured to determine a region to be identified in a pixel region of the pulse sequence signal;
a position prediction module configured to predict, based on the latest pulse interval length and the historical pulse interval lengths of each pixel position in the region to be identified, whether each pixel position in the region to be identified is a motion position, to obtain a prediction result, the position prediction module including:
an automaton construction sub-module configured to construct a plurality of deterministic finite state automata in one-to-one correspondence with the pixel positions in the region to be identified, wherein a state transfer function and a hidden state are pre-stored in each deterministic finite state automaton, the hidden state being a preset number of historical pulse interval lengths corresponding to the pixel position;
an acquisition sub-module configured to acquire the latest pulse interval length of each pixel position in the region to be identified and correspondingly input it into the plurality of deterministic finite state automata;
a comparison sub-module configured to obtain output results of the plurality of deterministic finite state automata;
a result determination sub-module configured to determine the prediction result based on the output results of the plurality of deterministic finite state automata;
and a result determination module configured to determine the motion area in the pulse sequence signal based on the prediction result.
11. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor, further comprising the license plate detection device of claim 10;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to control the license plate detection device to implement the license plate detection method of any one of claims 1-9.
12. The device of claim 11, wherein the electronic device comprises any one of: pulse camera, high-speed camera, vision camera, audio player, video player, navigation device, fixed position terminal, entertainment unit, smart phone, communications device, mobile device, devices in motor vehicles, vehicle-mounted camera, cell phone camera, sports or wearable camera, traffic camera, industrial inspection camera, camera mounted on a flyable object, medical camera, security camera, household appliance camera.
13. A computer readable storage medium having stored therein computer executable instructions which when executed cause a computer to perform the method of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310119014.4A CN116012833B (en) | 2023-02-03 | 2023-02-03 | License plate detection method, device, equipment, medium and program product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116012833A CN116012833A (en) | 2023-04-25 |
CN116012833B true CN116012833B (en) | 2023-10-10 |
Family
ID=86019415
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310119014.4A Active CN116012833B (en) | 2023-02-03 | 2023-02-03 | License plate detection method, device, equipment, medium and program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116012833B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116744133A (en) * | 2023-06-07 | 2023-09-12 | 脉冲视觉(北京)科技有限公司 | Shooting terminal and shooting method based on pulse array detection |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8682036B2 (en) * | 2012-04-06 | 2014-03-25 | Xerox Corporation | System and method for street-parking-vehicle identification through license plate capturing |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738251A (en) * | 2019-10-11 | 2020-01-31 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN115298705A (en) * | 2020-01-10 | 2022-11-04 | 顺丰科技有限公司 | License plate recognition method and device, electronic equipment and storage medium |
CN114612507A (en) * | 2022-02-28 | 2022-06-10 | 天津大学 | High-speed target tracking method based on pulse sequence type image sensor |
CN114708425A (en) * | 2022-03-17 | 2022-07-05 | 深圳力维智联技术有限公司 | Method and device for identifying vehicle parking violation and computer readable storage medium |
CN115388891A (en) * | 2022-08-05 | 2022-11-25 | 鹏城实验室 | Space positioning method and system for large-view-field moving target |
Non-Patent Citations (1)
Title |
---|
Research on binarization algorithms for license plate recognition; Zhu Dongfang; China Master's Theses Full-text Database, Information Science and Technology; pp. 7 and 41 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |