CN115690765A - License plate recognition method, license plate recognition device, electronic equipment, readable medium and program product

Info

Publication number
CN115690765A
Authority
CN
China
Prior art keywords
license plate
image
information
generate
target
Prior art date
Legal status
Granted
Application number
CN202211291436.1A
Other languages
Chinese (zh)
Other versions
CN115690765B (en)
Inventor
常海峰
陈海峰
刘洋
颜秉彦
汪涛
Current Assignee
Zhongguancun Smart City Co Ltd
Original Assignee
Zhongguancun Smart City Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongguancun Smart City Co Ltd
Priority to CN202211291436.1A
Publication of CN115690765A
Application granted
Publication of CN115690765B
Legal status: Active

Landscapes

  • Traffic Control Systems (AREA)

Abstract

Embodiments of the disclosure disclose a license plate recognition method, a license plate recognition device, an electronic device, a readable medium and a program product. One embodiment of the method comprises: performing distance measurement on a target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result; in response to determining that the ranging result indicates that an obstacle exists, starting a camera included in the license plate recognition device to acquire an image, obtaining a first image; performing license plate region recognition on the first image to generate license plate region information; generating a scaling ratio according to the license plate region information and standard license plate region information; scaling the focal length of the camera according to the scaling ratio, and controlling the camera with the scaled focal length to acquire images, obtaining a second image set; performing image fusion on second images in the second image set to generate a fused image; and performing license plate recognition on the fused image to generate license plate information. This embodiment improves the success rate and accuracy of vehicle identification.

Description

License plate recognition method, license plate recognition device, electronic equipment, readable medium and program product
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a license plate recognition method, apparatus, electronic device, readable medium, and program product.
Background
License plate recognition is a technology for automatically recognizing vehicle license plates and plays an important practical role in vehicle identification and traffic control. At present, vehicle identification is generally performed as follows: a single license plate recognition model performs image recognition on a single image collected by a fixed-focus camera in order to recognize the license plate.
However, the inventors have found that this approach often suffers from the following technical problems:
firstly, because the parking positions of vehicles often differ, the license plate in an image acquired by a fixed-focus camera may be too small, causing license plate recognition to fail;
secondly, due to environmental factors, the image acquired by the camera is often unclear, which reduces the accuracy of license plate recognition;
thirdly, since there are many types of license plates, and the character content and character arrangement of different license plate types often differ, recognition accuracy cannot be guaranteed when a single license plate recognition model is used for license plate recognition.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not form prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose license plate recognition methods, apparatuses, electronic devices, readable media, and program products to solve one or more of the technical problems noted in the background section above.
In a first aspect, some embodiments of the present disclosure provide a license plate recognition method, including: performing distance measurement on a target area through an infrared ranging sensor included in a license plate recognition device to generate a ranging result, where the infrared ranging sensor is configured to emit a plurality of coplanar infrared beams; in response to determining that the ranging result indicates that an obstacle exists, starting a camera included in the license plate recognition device to perform image acquisition to obtain a first image; performing license plate region recognition on the first image to generate license plate region information; generating a scaling ratio according to the license plate region information and standard license plate region information; scaling the focal length of the camera according to the scaling ratio, and controlling the camera with the scaled focal length to perform image acquisition to obtain a second image set; performing image fusion on second images in the second image set to generate a fused image; and performing license plate recognition on the fused image to generate license plate information, where the license plate information includes: license plate number information.
In a second aspect, some embodiments of the present disclosure provide a license plate recognition device, the device including: a distance measuring unit configured to perform distance measurement on a target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result, where the infrared ranging sensor is configured to emit a plurality of coplanar infrared beams; an image acquisition unit configured to, in response to determining that the ranging result indicates that an obstacle exists, start a camera included in the license plate recognition device to perform image acquisition to obtain a first image; a license plate region recognition unit configured to perform license plate region recognition on the first image to generate license plate region information; a generating unit configured to generate a scaling ratio according to the license plate region information and standard license plate region information; a focal length scaling unit configured to scale the focal length of the camera according to the scaling ratio and control the camera with the scaled focal length to perform image acquisition to obtain a second image set; an image fusion unit configured to perform image fusion on second images in the second image set to generate a fused image; and a license plate recognition unit configured to perform license plate recognition on the fused image to generate license plate information.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
In a fifth aspect, some embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantages: the license plate recognition method of some embodiments ensures the success rate and accuracy of license plate recognition. Specifically, the reasons for the low success rate and accuracy of license plate recognition are as follows: first, because the parking positions of vehicles often differ, the license plate in an image acquired by a fixed-focus camera may be too small, causing license plate recognition to fail; second, due to environmental factors, images acquired by the camera are often unclear, which reduces the accuracy of license plate recognition. Based on this, in the license plate recognition method of some embodiments of the present disclosure, first, distance measurement is performed on a target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result, where the infrared ranging sensor is configured to emit a plurality of coplanar infrared beams. In practice, vehicle sensing is usually performed in one of two ways: comparing a preset vehicle-free image with the current image to determine whether a vehicle is present, or directly performing vehicle recognition on the acquired image. Both ways require vehicle recognition on acquired images in real time, and their recognition efficiency is low. Alternatively, an infrared ranging sensor emitting a single infrared beam can be used for distance measurement, but it cannot effectively distinguish a vehicle from other obstacles. Therefore, an infrared ranging sensor emitting a plurality of coplanar infrared beams is used: because a vehicle is larger than other obstacles, the coplanar infrared beams can effectively determine whether the obstacle is a vehicle. Second, in response to determining that the ranging result indicates that an obstacle exists, a camera included in the license plate recognition device is started to acquire an image, obtaining a first image; that is, the camera is started for image acquisition only when a vehicle is present. Further, license plate region recognition is performed on the first image to generate license plate region information, identifying the region where the license plate is located. Then, a scaling ratio is generated according to the license plate region information and the standard license plate region information. Because the parking positions of vehicles differ, the license plate in the acquired image may be small; generating a scaling ratio allows the focal length of the camera to be adjusted so that the license plate occupies an appropriate proportion of the image. Next, the focal length of the camera is scaled according to the scaling ratio, and the camera with the scaled focal length is controlled to acquire images, obtaining a second image set. Then, image fusion is performed on the second images in the second image set to generate a fused image.
Because images acquired by the camera are often unclear under the influence of environmental factors, acquiring multiple images and fusing them improves image clarity. Finally, license plate recognition is performed on the fused image to generate license plate information, where the license plate information includes: license plate number information. In this way, the success rate and accuracy of vehicle identification are greatly improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of a license plate recognition method according to the present disclosure;
FIG. 2 is a schematic block diagram of some embodiments of a license plate recognition device according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, a flow 100 of some embodiments of a license plate recognition method according to the present disclosure is shown. The license plate recognition method comprises the following steps:
Step 101, performing distance measurement on a target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result.
In some embodiments, the subject performing the license plate recognition method (e.g., a computing device) may perform distance measurement on the target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result. The license plate recognition device may be a device for recognizing license plates and may include: an infrared ranging sensor and a camera. The infrared ranging sensor can emit a plurality of coplanar infrared beams. The target area may be the region, within the shooting area of the camera, whose center line is parallel to the plane in which the plurality of coplanar infrared beams lie. The ranging result represents a plurality of distance measurement values corresponding to the plurality of infrared beams emitted by the infrared ranging sensor.
The computing device may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above. It may be implemented, for example, as multiple software or software modules to provide distributed services, or as a single software or software module. And is not particularly limited herein. Further, there may be any number of computing devices, as desired for an implementation.
And step 102, in response to determining that the ranging result indicates that an obstacle exists, starting a camera included in the license plate recognition device to acquire an image to obtain a first image.
In some embodiments, in response to determining that the measurement result indicates that an obstacle exists, the executing subject may start the camera included in the license plate recognition device to perform image acquisition, obtaining the first image. The first image may be an image of the shooting area acquired by the camera. In practice, when the measurement result indicates that the distance measurement values of at least a target proportion of the infrared beams emitted by the infrared ranging sensor have changed, a vehicle obstacle is considered to be present. The target proportion may be 80%. Since a vehicle occupies a larger area than other obstacles, i.e., blocks more infrared beams, a vehicle obstacle can be determined to exist when the distance measurement values of the target proportion of infrared beams change.
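As an illustrative aside, the following minimal Python sketch shows the kind of proportion check described above; the function name, the 80% default target ratio and the distance-change tolerance are assumptions made for illustration, not part of the disclosed implementation.

```python
# Minimal sketch (assumed names and thresholds): decide whether a vehicle-sized
# obstacle is present by checking what fraction of the coplanar infrared beams
# report a changed distance reading.

def vehicle_obstacle_present(baseline_mm, current_mm, target_ratio=0.8, tolerance_mm=50):
    """baseline_mm / current_mm: per-beam distance readings of equal length.
    Returns True when at least target_ratio of the beams changed by more than
    tolerance_mm, which the method takes as evidence of a vehicle."""
    changed = sum(1 for b, c in zip(baseline_mm, current_mm) if abs(b - c) > tolerance_mm)
    return changed / len(baseline_mm) >= target_ratio

# Example: 9 of 10 beams shortened sharply -> treated as a vehicle obstacle.
print(vehicle_obstacle_present([3000] * 10, [900] * 9 + [3000]))  # True
```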
And step 103, performing license plate region recognition on the first image to generate license plate region information.
In some embodiments, the execution subject may perform license plate region recognition on the first image to generate license plate region information. The license plate region information can represent a region where a license plate in the first image is located. In practice, the license plate region information may include coordinates of corner points of a region where the license plate is located.
As an example, the execution subject may perform license plate region recognition on the first image through a YOLO (You Only Look Once) model to generate license plate region information.
In some optional implementation manners of some embodiments, the performing the license plate region recognition on the first image by the execution subject to generate license plate region information may include:
firstly, performing image binarization processing on the first image to generate a binarized image.
As an example, the execution subject may perform maximum-minimum binarization processing on the first image to generate the binarized image.
And step two, performing dodging (light-evening) processing on the binarized image to generate a dodged image.
As an example, the executing body may perform the dodging process on the binarized image through a variational Mask adaptive dodging algorithm to generate a dodged image.
And thirdly, carrying out image correction processing on the image after the dodging processing to generate a corrected image.
As an example, the execution subject may perform image distortion correction on the dodged image to generate the corrected image.
And fourthly, carrying out connected region identification on the corrected image to generate a candidate connected region set.
Wherein the candidate connected regions in the candidate connected region set are connected regions included in the corrected image. For example, a candidate connected region may be the region where the license plate is located. For another example, a candidate connected region may be the region where the air intake grille of the vehicle is located. The execution subject may perform connected region recognition on the corrected image by a seed filling method to generate the candidate connected region set.
Fifthly, executing the following second processing steps for each candidate connected region in the candidate connected region set:
the first substep is to perform corner detection on the candidate connected regions to generate a corner information set.
And the corner information in the corner information set represents the corners of the candidate connected regions. In practice, the corner point information may be characterized by corner point coordinates.
As an example, the executive body may perform corner detection on the candidate connected regions through a corner detection algorithm to generate a corner information set. For example, the corner detection algorithm may be a Harris corner detection algorithm.
And a second sub-step, performing region fitting according to the corner information in the corner information set to generate a fitting region type.
Wherein the fitting region type represents the region type of the fitted region. In practice, the fitting region type may include, but is not limited to, any of the following: rectangular type, circular type, trapezoidal type, and other types.
And a third substep, determining the confidence of the candidate connected region in response to the fact that the fitting region type is consistent with a preset fitting region type, and obtaining a confidence value.
The preset fitting region type may be a rectangular type. The confidence value represents the probability that the candidate connected region is the region where the license plate is located. The execution subject may use a corresponding recognition confidence of the YOLO model in license plate region recognition as the confidence value. The recognition confidence coefficient represents the confidence coefficient of the recognized region as the license plate region.
And a fourth substep of generating the license plate region information according to the candidate connected region information in response to determining that the confidence value is greater than a preset threshold value.
Wherein, the preset threshold may be 95%. The execution subject may determine a region position corresponding to the candidate connected region information as a region position corresponding to the license plate region information, so as to generate the license plate region information.
When the license plate region information is generated, i.e., when the license plate region is identified, the character color, character type and character distribution of the license plate do not need to be recognized, so binarizing the image greatly reduces the amount of data to be processed and improves recognition efficiency. In addition, considering that image acquisition may be affected by external light sources, the dodging process reduces the brightness of over-exposed areas and increases the brightness of under-exposed areas. Then, considering that the acquired image may be distorted to some degree, the image correction process removes the distortion. Furthermore, since the region corresponding to the license plate is a closed, connected region, connected region recognition allows fast and efficient identification. However, other connected regions, such as the air intake grille, cause interference. Therefore, regions that do not correspond to a license plate are removed by performing region fitting based on the corner information and comparing the result with the preset fitting region type. Finally, considering that the above method may still have a certain error, further filtering is performed using the confidence value, which improves the accuracy of the generated license plate region information.
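The following condensed Python/OpenCV sketch mirrors the optional pipeline described above (binarization, light evening, correction, connected regions, corner detection, rectangle fitting, confidence filtering). The specific operators (Otsu thresholding in place of max-min binarization, Gaussian background subtraction in place of the variational Mask dodging algorithm, an identity correction step), the thresholds and the placeholder confidence source are assumptions; only the overall flow follows the text.

```python
import cv2
import numpy as np

def find_plate_regions(first_image_bgr, conf_threshold=0.95, yolo_confidence=lambda box: 0.97):
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    # Step 1: binarization (Otsu stands in for the max-min scheme described above).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 2: crude illumination equalization stands in for the Mask dodging algorithm.
    background = cv2.GaussianBlur(binary, (51, 51), 0)
    evened = cv2.subtract(binary, background // 2)
    # Step 3: image correction would undo lens distortion here (identity in this sketch).
    corrected = evened
    # Step 4: connected regions as license-plate candidates.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(corrected)
    plates = []
    for i in range(1, num):
        x, y, w, h, area = stats[i]
        if area < 500:          # assumed noise filter
            continue
        component = (labels == i).astype(np.uint8) * 255
        # Step 5a: corner detection on the candidate region.
        corners = cv2.goodFeaturesToTrack(component, maxCorners=20, qualityLevel=0.01, minDistance=5)
        if corners is None:
            continue
        # Step 5b: fit the corners with a rotated rectangle; keep roughly rectangular regions.
        rect = cv2.minAreaRect(corners)
        rw, rh = rect[1]
        if rw * rh == 0 or abs(rw * rh - area) / area > 0.4:   # assumed "rectangular type" test
            continue
        # Steps 5c/5d: keep the region only if the detector confidence clears the threshold.
        if yolo_confidence((x, y, w, h)) > conf_threshold:
            plates.append({"corners": [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]})
    return plates
```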
And step 104, generating a scaling ratio according to the license plate region information and the standard license plate region information.
In some embodiments, the execution subject may generate the scaling ratio according to the license plate region information and the standard license plate region information. The standard license plate region information may be preset region information of a license plate in an image. In practice, the execution subject may determine the ratio of the area of the region corresponding to the license plate region information to the area of the region corresponding to the standard license plate region information as the scaling ratio. When the scaling ratio is 1, the focal length of the camera does not need to be adjusted; when the scaling ratio is less than 1, the focal length of the camera is increased; and when the scaling ratio is greater than 1, the focal length of the camera is decreased.
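A minimal sketch of the scaling-ratio computation, assuming the region information is given as four corner points; the shoelace-area helper and the example coordinates are illustrative only.

```python
# Sketch of the scaling generation (assumed region-info format: four corner points).

def region_area(corners):
    # Shoelace formula over the corner points of a quadrilateral region.
    area = 0.0
    for (x1, y1), (x2, y2) in zip(corners, corners[1:] + corners[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def scaling_ratio(plate_corners, standard_corners):
    # Ratio of the detected plate area to the preset standard plate area:
    # < 1 means the plate appears too small, so the focal length should be increased.
    return region_area(plate_corners) / region_area(standard_corners)

# Example: the detected plate covers a quarter of the standard area -> ratio 0.25.
plate = [(10, 10), (60, 10), (60, 35), (10, 35)]
standard = [(0, 0), (100, 0), (100, 50), (0, 50)]
print(scaling_ratio(plate, standard))  # 0.25
```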
And step 105, scaling the focal length of the camera according to the scaling ratio, and controlling the camera with the scaled focal length to perform image acquisition to obtain a second image set.
In some embodiments, the execution subject may scale the focal length of the camera according to the scaling ratio, and control the camera with the scaled focal length to perform image acquisition to obtain the second image set. The second image set is a plurality of images continuously acquired by the camera after the focal length is scaled.
As an example, the execution subject may scale the focal length of the camera according to a correspondence between the scaling ratio and the focal length. After the focal length scaling is finished, the camera with the scaled focal length is controlled to perform image acquisition to obtain the second image set.
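A hedged sketch of the zoom step; the inverse ratio-to-focal-length mapping and the clamping limits are assumptions standing in for the device's actual correspondence table.

```python
# Assumed mapping: focal length scaled inversely to the ratio, clamped to lens limits.

def zoomed_focal_length(current_mm, scaling, min_mm=4.0, max_mm=50.0):
    if scaling == 1.0:
        return current_mm                      # no zoom needed
    target = current_mm / scaling              # ratio < 1 -> longer focal length, > 1 -> shorter
    return max(min_mm, min(max_mm, target))

print(zoomed_focal_length(12.0, 0.25))  # 48.0 -> zoom in on a small plate
```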
And step 106, performing image fusion on the second images in the second image set to generate a fused image.
In some embodiments, the performing subject may perform image fusion on the second images in the second set of images to generate a fused image.
As an example, first, the execution subject may perform image shake elimination on each of the second images in the second image set to generate a third image, resulting in a third image set. Then, the executing subject may perform image superimposition on the third image in the third image set to generate the fused image.
In some optional implementations of some embodiments, the performing subject performing image fusion on the second image in the second image set to generate a fused image may include:
for each second image in the second image set, performing a third processing step:
firstly, carrying out key point detection on the second image to generate a key point information group.
Wherein, the key point information in the key point information group includes: keypoint coordinates and keypoint feature vectors. The key point feature vector is a feature vector of a key point corresponding to the key point coordinates.
As an example, the execution subject may perform keypoint detection on the second image by using a Scale-invariant Feature Transform (SIFT) algorithm to generate a keypoint information group.
And secondly, generating a key point network graph corresponding to the second image according to the key point coordinates included in the key point information group, wherein the key point network graph is used as a first identification feature corresponding to the second image.
The execution subject may connect the keypoints corresponding to the keypoint coordinates included in the keypoint information group to generate a keypoint network map corresponding to a second image, which is used as a first identification feature corresponding to the second image.
And thirdly, performing vector splicing on the key point feature vectors included in the key point information group to generate spliced vectors serving as second identification features corresponding to the second image.
And fourthly, clustering the second identification features corresponding to the second images in the second image set to obtain a clustered image group set.
Wherein the clustered images in each clustered image group are the second images corresponding to the same cluster center. In practice, the execution subject may cluster the second identification features corresponding to the second images in the second image set through the K-means algorithm to obtain the clustered image group set.
And fifthly, screening a clustered image group meeting the screening condition from the clustered image group set as the target image group.
Wherein, the screening conditions are as follows: the number of the clustered images in the clustered image group is the same as the number of the clustered images in the clustered image group containing the most clustered images in the clustered image group set.
Sixthly, according to the target image group, executing the following first image fusion step:
the first substep is to randomly select a target image from the target image group as a reference image.
And a second substep of obtaining a candidate image group by using the target images except the reference image in the target image group as candidate images.
A third substep of, for each candidate image in the set of candidate images, performing the following second image fusion step:
step 1: and performing feature rotation and movement on the first identification feature corresponding to the candidate image.
The executing body may rotate the first identification feature corresponding to the candidate image clockwise and translate it by a preset step length.
Step 2: in response to determining that the first identification feature corresponding to the candidate image is aligned with the first identification feature corresponding to the reference image, the candidate image is superimposed onto the reference image according to the amount of rotation and the amount of movement corresponding to the candidate image.
And a fourth substep of determining the superimposed image as the fused image.
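The following condensed Python/OpenCV sketch approximates the fusion branch above: SIFT keypoints per second image, a fixed-length descriptor signature as the second identification feature, K-means clustering to keep the largest (target) group, then alignment against a reference image and averaging. The signature length, the cluster count, the descriptor matching and the affine alignment standing in for the rotate-and-shift feature matching are all assumptions; it also assumes at least as many images as clusters.

```python
import cv2
import numpy as np

def fuse_images(second_images_gray, n_clusters=2, signature_len=32):
    sift = cv2.SIFT_create()
    keypoints, descriptors, signatures = [], [], []
    for img in second_images_gray:
        kps, des = sift.detectAndCompute(img, None)
        keypoints.append(kps)
        descriptors.append(des)
        # Second identification feature: spliced keypoint descriptors, truncated/padded
        # to a fixed length so the images can be clustered together.
        flat = des.flatten() if des is not None else np.zeros(1, np.float32)
        sig = np.zeros(signature_len, np.float32)
        sig[:min(signature_len, flat.size)] = flat[:signature_len]
        signatures.append(sig)
    data = np.float32(signatures)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(data, n_clusters, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
    labels = labels.ravel()
    best = int(np.argmax(np.bincount(labels)))      # target image group: the largest cluster
    group = [i for i, lab in enumerate(labels) if lab == best]
    ref = group[0]                                  # reference image (first of the group)
    acc = np.float32(second_images_gray[ref])
    fused_count = 1
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    for i in group[1:]:
        if descriptors[i] is None or descriptors[ref] is None:
            continue
        matches = matcher.match(descriptors[i], descriptors[ref])
        if len(matches) < 3:
            continue
        src = np.float32([keypoints[i][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([keypoints[ref][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        warp, _ = cv2.estimateAffinePartial2D(src, dst)   # rotation + translation (+ scale)
        if warp is None:
            continue
        h, w = second_images_gray[ref].shape
        acc += np.float32(cv2.warpAffine(second_images_gray[i], warp, (w, h)))
        fused_count += 1
    return np.uint8(acc / fused_count)              # superimposed (averaged) fused image
```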
And step 107, carrying out license plate recognition on the fused image to generate license plate information.
In some embodiments, the execution subject may perform license plate recognition on the fused image in various ways to generate license plate information. Wherein, license plate information includes: license plate number information and license plate type.
In some optional implementation manners of some embodiments, the performing the license plate recognition on the fused image by the performing body to generate license plate information may include the following steps:
firstly, license plate positioning is carried out on the fused image through a license plate positioning model in a pre-trained license plate recognition model so as to generate a license plate region image.
The license plate location model may be a YOLO model.
And secondly, performing license plate type recognition on the license plate region image through a license plate type recognition model in the license plate recognition model so as to generate the license plate type included by the license plate information.
The license plate type recognition model may be a convolutional neural network model with multiple classification layers. In practice, the license plate types may include, but are not limited to, at least one of: small new energy vehicle type, large new energy vehicle type, large automobile type, trailer type, small automobile type, embassy or consulate vehicle type, Hong Kong and Macau vehicle type, driver-training (coach) vehicle type, police vehicle type and emergency vehicle type.
And thirdly, identifying license plate numbers of the license plate area images through a license plate number identification model in the license plate identification model so as to generate license plate number information included in the license plate information.
The license plate number recognition model may be an RCNN (Region-based Convolutional Neural Network) model. In practice, different license plate types may correspond to different license plate number recognition models. The different license plate number recognition models share the same model structure but may be trained with different training samples in the model training stage. To improve training speed, in practice, the license plate number recognition model may be pre-trained with general training samples and then trained with license plate images corresponding to its license plate type. In practice, the license plate positioning model, the license plate type recognition model and the license plate number recognition model can be trained in parallel, and the license plate number recognition models corresponding to different license plate types can also be trained in parallel.
The first to third steps above constitute an inventive point of the present disclosure and solve the technical problem mentioned in the background: recognition accuracy cannot be guaranteed when a single license plate recognition model is used, because there are many license plate types and the character content and character arrangement of different license plate types differ. Based on this, the present disclosure first performs license plate positioning with a license plate positioning model, then recognizes the license plate type with a license plate type recognition model, and finally recognizes the license plate number with a license plate number recognition model. In practice, considering that different license plate types have different plate structures, license plate number recognition models with the same structure but trained on different training samples can be used for number recognition. Training each license plate number recognition model with training samples of its own license plate type improves its recognition precision. In addition, license plate positioning, license plate type recognition and license plate number recognition are performed by separate models, which refines the tasks and avoids the low training efficiency caused by an overly complex model structure. Meanwhile, to further improve training speed, the license plate positioning model, the license plate type recognition model and the license plate number recognition model can be trained in parallel, and the license plate number recognition models corresponding to different license plate types can also be trained in parallel. In this way, training speed is guaranteed while license plates of different types are effectively recognized.
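A minimal sketch of the three-stage flow (positioning, type recognition, per-type number recognition). The model objects and their predict interfaces are hypothetical placeholders; only the routing of the plate crop to a type-specific number model follows the description above.

```python
# Hypothetical model interfaces: locator.predict(img) -> (x, y, w, h);
# type_classifier.predict(crop) -> plate type string;
# number_models: dict mapping plate type -> model whose predict(crop) returns the number.

def recognize_plate(fused_image, locator, type_classifier, number_models):
    x, y, w, h = locator.predict(fused_image)            # license plate positioning model
    crop = fused_image[y:y + h, x:x + w]
    plate_type = type_classifier.predict(crop)            # license plate type recognition model
    plate_number = number_models[plate_type].predict(crop)  # per-type number recognition model
    return {"license_plate_type": plate_type, "license_plate_number": plate_number}
```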
In some optional implementations of some embodiments, the executing body may further perform the following processing steps:
and in a first step, in response to the fact that the license plate type is determined to be consistent with the target type, the vehicle barrier is controlled to be lifted.
Wherein, the target type is an emergency vehicle type.
And a second step of executing the following first processing steps in response to determining that the license plate type is inconsistent with the target type:
and the first substep is to perform database connection with a target license plate information base, and read the target license plate information in the target license plate information base to obtain a target license plate information group sequence.
The target license plate information base stores the license plate information of vehicles allowed to pass. The target license plate information groups in the target license plate information group sequence are ordered, and the license plate information within each target license plate information group is also ordered.
The second substep, according to the target license plate information group sequence and the license plate information, executing the following license plate confirmation steps:
substep 1: and determining the mean value of the first index value and the second index value to obtain a target index value.
The first index value is the index value of the first target license plate information group in the target license plate information group sequence. The second index value is the index value of the last target license plate information group in the sequence of target license plate information groups.
Substep 2: and determining a target license plate information group corresponding to the target index value in the target license plate information group sequence as a candidate license plate information group.
Substep 3: and in response to the fact that the license plate information meets the screening condition, determining whether the candidate license plate information group has the candidate license plate information which is the same as the license plate information by adopting binary search.
Wherein, the screening conditions are as follows: and the license plate information is greater than or equal to the candidate license plate information at the third index position in the candidate license plate information group, and the license plate information is less than or equal to the candidate license plate information at the fourth index position in the candidate license plate information group. The third index position is the index position of the first candidate license plate information in the candidate license plate information group. The fourth index position is the index position of the last candidate license plate information in the candidate license plate information group.
Substep 4: in response to the presence, the vehicle boom is controlled to lift.
Substep 5: and responding to the absence, and displaying prompt information at the information prompt end.
And a third substep, in response to the fact that the license plate information does not meet the screening condition, determining a target license plate information group between the first index value and the target index value as a target license plate information group sequence, and executing the license plate confirmation step again.
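A simplified sketch of the whitelist lookup: a plain binary search over sorted license plate numbers stands in for the grouped search described above (the per-group handling and the information-prompt side are omitted).

```python
# Simplified sketch (assumed data shape): binary search over an already-sorted list of
# allowed license plate numbers.

def plate_allowed(sorted_plate_numbers, plate_number):
    lo, hi = 0, len(sorted_plate_numbers) - 1
    while lo <= hi:
        mid = (lo + hi) // 2                 # target index value: mean of the two ends
        if sorted_plate_numbers[mid] == plate_number:
            return True                      # found -> control the vehicle barrier to lift
        if sorted_plate_numbers[mid] < plate_number:
            lo = mid + 1
        else:
            hi = mid - 1
    return False                             # absent -> show prompt information instead

allowed = sorted(["AB12345", "CD67890", "EF11111"])
print(plate_allowed(allowed, "CD67890"))  # True
```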
The above embodiments of the present disclosure have the following beneficial effects: the license plate recognition method of some embodiments ensures the success rate and accuracy of license plate recognition. Specifically, the reasons for the low success rate and accuracy of license plate recognition are as follows: first, because the parking positions of vehicles often differ, the license plate in an image acquired by a fixed-focus camera may be too small, causing license plate recognition to fail; second, due to environmental factors, images acquired by the camera are often unclear, which reduces the accuracy of license plate recognition. Based on this, in the license plate recognition method of some embodiments of the present disclosure, first, distance measurement is performed on a target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result, where the infrared ranging sensor is configured to emit a plurality of coplanar infrared beams. In practice, vehicle sensing is usually performed in one of two ways: comparing a preset vehicle-free image with the current image to determine whether a vehicle is present, or directly performing vehicle recognition on the acquired image. Both ways require vehicle recognition on acquired images in real time, and their recognition efficiency is low. Alternatively, an infrared ranging sensor emitting a single infrared beam can be used for distance measurement, but it cannot effectively distinguish a vehicle from other obstacles. Therefore, an infrared ranging sensor emitting a plurality of coplanar infrared beams is used: because a vehicle is larger than other obstacles, the coplanar infrared beams can effectively determine whether the obstacle is a vehicle. Second, in response to determining that the ranging result indicates that an obstacle exists, a camera included in the license plate recognition device is started to acquire an image, obtaining a first image; that is, the camera is started for image acquisition only when a vehicle is present. Further, license plate region recognition is performed on the first image to generate license plate region information, identifying the region where the license plate is located. Then, a scaling ratio is generated according to the license plate region information and the standard license plate region information. Because the parking positions of vehicles differ, the license plate in the acquired image may be small; generating a scaling ratio allows the focal length of the camera to be adjusted so that the license plate occupies an appropriate proportion of the image. Next, the focal length of the camera is scaled according to the scaling ratio, and the camera with the scaled focal length is controlled to acquire images, obtaining a second image set. Then, image fusion is performed on the second images in the second image set to generate a fused image.
Because images acquired by the camera are often unclear under the influence of environmental factors, acquiring multiple images and fusing them improves image clarity. Finally, license plate recognition is performed on the fused image to generate license plate information, where the license plate information includes: license plate number information. In this way, the success rate and accuracy of vehicle identification are greatly improved.
With further reference to fig. 2, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a license plate recognition apparatus, which correspond to those shown in fig. 1, and which can be applied in various electronic devices.
As shown in fig. 2, a license plate recognition device 200 of some embodiments includes: a distance measuring unit 201, an image acquisition unit 202, a license plate region recognition unit 203, a generating unit 204, a focal length scaling unit 205, an image fusion unit 206 and a license plate recognition unit 207. The distance measuring unit 201 is configured to perform distance measurement on a target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result, where the infrared ranging sensor is configured to emit a plurality of coplanar infrared beams; the image acquisition unit 202 is configured to, in response to determining that the ranging result indicates that an obstacle exists, start a camera included in the license plate recognition device to perform image acquisition to obtain a first image; the license plate region recognition unit 203 is configured to perform license plate region recognition on the first image to generate license plate region information; the generating unit 204 is configured to generate a scaling ratio according to the license plate region information and standard license plate region information; the focal length scaling unit 205 is configured to scale the focal length of the camera according to the scaling ratio and control the camera with the scaled focal length to perform image acquisition to obtain a second image set; the image fusion unit 206 is configured to perform image fusion on second images in the second image set to generate a fused image; and the license plate recognition unit 207 is configured to perform license plate recognition on the fused image to generate license plate information.
It is to be understood that the units described in the license plate recognition device 200 correspond to the respective steps in the method described with reference to fig. 1. Thus, the operations, features and advantageous effects of the method described above are also applicable to the license plate recognition device 200 and the units included therein, and are not described herein again.
Referring now to FIG. 3, shown is a block diagram of an electronic device (e.g., computing device) 300 suitable for use in implementing some embodiments of the present disclosure. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: perform distance measurement on a target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result, where the infrared ranging sensor is configured to emit a plurality of coplanar infrared beams; in response to determining that the ranging result indicates that an obstacle exists, start a camera included in the license plate recognition device to acquire an image to obtain a first image; perform license plate region recognition on the first image to generate license plate region information; generate a scaling ratio according to the license plate region information and standard license plate region information; scale the focal length of the camera according to the scaling ratio, and control the camera with the scaled focal length to acquire images to obtain a second image set; perform image fusion on second images in the second image set to generate a fused image; and perform license plate recognition on the fused image to generate license plate information, where the license plate information includes: license plate number information.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, which may be described as: a processor comprises a distance measuring unit, an image collecting unit, a license plate region identifying unit, a generating unit, a focal length zooming unit, an image fusing unit and a license plate identifying unit. The names of the units do not limit the units themselves in some cases, for example, the image capturing unit may be further described as "a unit that turns on a camera included in the license plate recognition device to capture an image and obtain a first image in response to determining that the measurement result indicates that an obstacle exists".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Some embodiments of the present disclosure also provide a computer program product comprising a computer program that, when executed by a processor, implements any of the license plate recognition methods described above.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept defined above. For example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure are also covered.

Claims (10)

1. A license plate recognition method, comprising:
performing distance measurement on a target area through an infrared distance measuring sensor included in a license plate recognition device to generate a distance measuring result, wherein the infrared distance measuring sensor is used for emitting a plurality of coplanar infrared beams;
in response to determining that the measuring result indicates that an obstacle exists, starting a camera included in the license plate recognition device to collect an image to obtain a first image;
carrying out license plate region identification on the first image to generate license plate region information;
generating a scaling according to the license plate region information and the standard license plate region information;
according to the scaling, performing focal length zooming on the camera, and controlling the camera after the focal length zooming to acquire images to obtain a second image set;
performing image fusion on a second image in the second image set to generate a fused image;
and performing license plate recognition on the fused image to generate license plate information, wherein the license plate information comprises: license plate number information.
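For orientation only, the following Python sketch mirrors the control flow of claim 1. It is not the disclosed implementation: the sensor, camera and recognizer objects and their methods (obstacle_present, capture_frame, set_focal_length, locate_plate, read_plate), the assumed standard plate width, and the simple frame averaging used as a stand-in for the fusion of claims 4-5 are all hypothetical placeholders.

```python
import numpy as np

STANDARD_PLATE_WIDTH_PX = 140  # assumed pixel width of the standard license plate region

def recognize_plate(sensor, camera, recognizer, num_frames=5):
    """Control-flow sketch of claim 1; all device objects are injected placeholders."""
    if not sensor.obstacle_present():                      # ranging result: no obstacle, do nothing
        return None
    first_image = camera.capture_frame()                   # first image
    region = recognizer.locate_plate(first_image)          # license plate region information
    scale = STANDARD_PLATE_WIDTH_PX / max(region.width, 1) # scaling vs. the standard region
    camera.set_focal_length(camera.focal_length * scale)   # focal-length zoom
    second_images = [camera.capture_frame() for _ in range(num_frames)]  # second image set
    fused = np.mean(np.stack(second_images, axis=0), axis=0).astype(np.uint8)  # stand-in fusion
    return recognizer.read_plate(fused)                    # license plate information
```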
2. The method of claim 1, wherein the license plate information further comprises: a license plate type; and
the method further comprises the following steps:
controlling the vehicle barrier to lift in response to determining that the license plate type is consistent with the target type;
in response to determining that the license plate type is inconsistent with the target type, performing the following first processing step:
the method comprises the steps of connecting a target license plate information base with a database, reading target license plate information in the target license plate information base, and obtaining a target license plate information group sequence, wherein license plate information in the target license plate information group sequence is ordered, and license plate information in a target license plate information group in the target license plate information group sequence is ordered;
and executing the following license plate confirmation steps according to the target license plate information group sequence and the license plate information:
determining an average value of a first index value and a second index value to obtain a target index value, wherein the first index value is the index value of the first target license plate information group in the target license plate information group sequence, and the second index value is the index value of the last target license plate information group in the target license plate information group sequence;
determining the target license plate information group corresponding to the target index value in the target license plate information group sequence as a candidate license plate information group;
in response to determining that the license plate information meets a screening condition, determining, by binary search, whether candidate license plate information identical to the license plate information exists in the candidate license plate information group, wherein the screening condition is that: the license plate information is greater than or equal to the candidate license plate information at a third index position in the candidate license plate information group, and the license plate information is less than or equal to the candidate license plate information at a fourth index position in the candidate license plate information group, the third index position being the index position of the first candidate license plate information in the candidate license plate information group, and the fourth index position being the index position of the last candidate license plate information in the candidate license plate information group;
in response to determining that the candidate license plate information exists, controlling the vehicle barrier to lift;
in response to determining that the candidate license plate information does not exist, displaying prompt information at an information prompt terminal;
and in response to determining that the license plate information does not meet the screening condition, determining the target license plate information groups between the first index value and the target index value as a new target license plate information group sequence, and executing the license plate confirmation step again.
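The lookup in claim 2 amounts to a two-level binary search over ordered groups of ordered license plate records. The sketch below assumes the records are plain strings that sort lexicographically; the claim only spells out narrowing toward the groups below the target index, so the symmetric step toward the upper half is an added assumption, as is the one-line usage note at the end.

```python
from bisect import bisect_left

def plate_is_registered(groups, plate):
    """groups: ordered list of ordered lists of plate strings (the group sequence)."""
    lo, hi = 0, len(groups) - 1                      # index of the first / last group
    while lo <= hi:
        mid = (lo + hi) // 2                         # target index value (average of lo and hi)
        group = groups[mid]                          # candidate license plate information group
        if group and group[0] <= plate <= group[-1]: # screening condition of the claim
            i = bisect_left(group, plate)            # binary search inside the group
            return i < len(group) and group[i] == plate
        if not group or plate < group[0]:
            hi = mid - 1                             # keep the groups below the target index
        else:
            lo = mid + 1                             # assumed symmetric step for the upper half
    return False

# Found -> control the vehicle barrier to lift; not found -> display prompt information.
```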
3. The method of claim 2, wherein the license plate region identifying the first image to generate license plate region information comprises:
carrying out image binarization processing on the first image to generate a binarized image;
carrying out dodging processing on the binarized image to generate a dodged image;
carrying out image correction processing on the dodged image to generate a corrected image;
performing connected region identification on the corrected image to generate a candidate connected region set;
for each candidate connected region in the candidate connected region set, performing the following second processing steps:
carrying out corner detection on the candidate connected regions to generate a corner information set;
performing region fitting according to the corner information in the corner information set to generate a fitting region type;
in response to the fact that the type of the fitting region is consistent with the type of a preset fitting region, determining the confidence coefficient of the candidate connected region to obtain a confidence value, wherein the confidence value represents the probability that the candidate connected region is the region where the license plate is located;
and generating the license plate region information according to the candidate connected region information in response to the fact that the confidence value is larger than a preset threshold value.
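A rough OpenCV rendering of the steps in claim 3 is sketched below. All thresholds, the CLAHE-based stand-in for the dodging step, the corner-based aspect-ratio test used as the "fitting region type" check and the fill-ratio confidence are illustrative assumptions; the illumination evening is applied to the grayscale image before binarization, which is the usual practical order rather than the literal order of the claim, and the correction step is left as a comment.

```python
import cv2
import numpy as np

def find_plate_regions(image, min_confidence=0.6):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    evened = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)   # dodging stand-in
    _, binary = cv2.threshold(evened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # image correction (e.g. deskew) would go here; omitted for brevity
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    regions = []
    for i in range(1, num):                                    # label 0 is the background
        x, y, w, h, area = stats[i]
        roi = (labels[y:y + h, x:x + w] == i).astype(np.uint8) * 255
        corners = cv2.goodFeaturesToTrack(roi, maxCorners=20, qualityLevel=0.05, minDistance=5)
        if corners is None or len(corners) < 4:
            continue
        (_, (rw, rh), _) = cv2.minAreaRect(corners)            # region fitting from corner points
        if min(rw, rh) == 0:
            continue
        ratio = max(rw, rh) / min(rw, rh)
        if 2.0 <= ratio <= 6.0:                                # "fitting region type" check (assumed)
            confidence = area / float(w * h)                   # fill ratio as a stand-in confidence
            if confidence > min_confidence:
                regions.append((x, y, w, h, confidence))
    return regions
```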
4. The method of claim 3, wherein the image fusing the second images of the second set of images to generate a fused image comprises:
for each second image of the set of second images, performing a third processing step:
performing keypoint detection on the second image to generate a keypoint information group, wherein the keypoint information in the keypoint information group comprises: key point coordinates and key point feature vectors;
generating a key point network graph corresponding to the second image according to the key point coordinates included in the key point information group, wherein the key point network graph is used as a first identification feature corresponding to the second image;
performing vector splicing on the key point feature vectors included in the key point information group to generate spliced vectors serving as second identification features corresponding to the second image;
and clustering the second identification features corresponding to the second images in the second image set to obtain a clustered image group set, wherein the clustered images in each clustered image group are second images corresponding to the same cluster center.
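As an illustration of claim 4, the sketch below uses ORB keypoints and k-means from scikit-learn. Padding or truncating to a fixed number of keypoints so that all stitched vectors share one length, and keeping the raw coordinate array as the "keypoint network graph", are simplifying assumptions; the claim does not name a particular detector or clustering algorithm.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

N_KP = 64  # assumed fixed keypoint budget so stitched vectors have equal length

def describe(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    orb = cv2.ORB_create(nfeatures=N_KP)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    coords = np.array([kp.pt for kp in keypoints], dtype=np.float32)  # first identification feature
    stitched = np.zeros(N_KP * 32, dtype=np.float32)                  # second identification feature
    if descriptors is not None:
        flat = descriptors[:N_KP].astype(np.float32).ravel()
        stitched[:flat.size] = flat
    return coords, stitched

def cluster_second_images(second_images, n_clusters=2):
    features = np.stack([describe(img)[1] for img in second_images])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    groups = {}
    for img, label in zip(second_images, labels):
        groups.setdefault(int(label), []).append(img)   # images sharing a cluster center
    return list(groups.values())                        # clustered image group set
```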
5. The method of claim 4, wherein the image fusing the second images of the second set of images to generate a fused image, further comprises:
screening a clustered image group meeting a preset screening condition from the clustered image group set to serve as a target image group;
and according to the target image group, performing the following first image fusion step:
randomly selecting a target image from the target image group as a reference image;
taking the target images except the reference image in the target image group as candidate images to obtain a candidate image group;
for each candidate image in the set of candidate images, performing the following second image fusion step:
performing feature rotation and movement on a first identification feature corresponding to the candidate image;
in response to determining that the first identification feature corresponding to the candidate image is aligned with the first identification feature corresponding to the reference image, superimposing the candidate image onto the reference image according to the amount of rotation and the amount of movement corresponding to the candidate image;
and determining the superposed image as the fusion image.
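One way to realize the alignment-and-superposition of claim 5 is classical feature-based registration, sketched below. Taking the first image of the group as the reference (instead of a random choice), matching ORB descriptors, estimating a rotation-plus-translation with cv2.estimateAffinePartial2D and averaging the warped frames are illustrative choices rather than the disclosed procedure.

```python
import cv2
import numpy as np

def fuse_group(target_images):
    reference, candidates = target_images[0], target_images[1:]
    orb = cv2.ORB_create(500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    ref_kp, ref_desc = orb.detectAndCompute(ref_gray, None)
    aligned = [reference.astype(np.float32)]
    for image in candidates:
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        kp, desc = orb.detectAndCompute(gray, None)
        if desc is None or ref_desc is None:
            continue
        matches = matcher.match(desc, ref_desc)
        if len(matches) < 4:
            continue
        src = np.float32([kp[m.queryIdx].pt for m in matches])
        dst = np.float32([ref_kp[m.trainIdx].pt for m in matches])
        matrix, _ = cv2.estimateAffinePartial2D(src, dst)      # rotation + translation (+ scale)
        if matrix is None:
            continue
        warped = cv2.warpAffine(image, matrix, (reference.shape[1], reference.shape[0]))
        aligned.append(warped.astype(np.float32))
    return np.mean(aligned, axis=0).astype(np.uint8)            # superimposed (averaged) fusion
```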
6. The method of claim 5, wherein the license plate recognition of the fused image to generate license plate information comprises:
positioning the license plate of the fusion image through a license plate positioning model in a pre-trained license plate recognition model to generate a license plate region image;
performing license plate type recognition on the license plate region image through a license plate type recognition model in the license plate recognition model to generate a license plate type included in the license plate information;
and identifying license plate numbers of the license plate region images through a license plate number identification model in the license plate identification model so as to generate license plate number information included in the license plate information.
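The cascade in claim 6 can be read as three models applied in sequence. The sketch below is only a wiring diagram under that reading; the locator, type and number models and their predict() interfaces are hypothetical placeholders for the pre-trained sub-models of the license plate recognition model.

```python
def recognize_license_plate(fused_image, locator, type_model, number_model):
    x, y, w, h = locator.predict(fused_image)         # license plate positioning model -> region box
    plate_image = fused_image[y:y + h, x:x + w]       # license plate region image
    plate_type = type_model.predict(plate_image)      # license plate type recognition model
    plate_number = number_model.predict(plate_image)  # license plate number recognition model
    return {"license_plate_type": plate_type, "license_plate_number": plate_number}
```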
7. A license plate recognition device comprising:
a distance measuring unit configured to perform distance measurement on a target area through an infrared ranging sensor included in the license plate recognition device to generate a ranging result, wherein the infrared ranging sensor is used for emitting a plurality of coplanar infrared beams;
an image acquisition unit configured to, in response to determining that the measuring result indicates that an obstacle exists, start a camera included in the license plate recognition device to acquire an image to obtain a first image;
a license plate region recognition unit configured to perform license plate region recognition on the first image to generate license plate region information;
a generating unit configured to generate a scaling according to the license plate region information and standard license plate region information;
a focal length zooming unit configured to perform focal length zooming on the camera according to the scaling, and control the camera after the focal length zooming to acquire images to obtain a second image set;
an image fusion unit configured to perform image fusion on second images in the second image set to generate a fused image;
and a license plate recognition unit configured to perform license plate recognition on the fused image to generate license plate information.
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1 to 6.
CN202211291436.1A 2022-10-21 2022-10-21 License plate recognition method, device, electronic equipment, readable medium and program product Active CN115690765B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211291436.1A CN115690765B (en) 2022-10-21 2022-10-21 License plate recognition method, device, electronic equipment, readable medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211291436.1A CN115690765B (en) 2022-10-21 2022-10-21 License plate recognition method, device, electronic equipment, readable medium and program product

Publications (2)

Publication Number Publication Date
CN115690765A true CN115690765A (en) 2023-02-03
CN115690765B CN115690765B (en) 2023-06-13

Family

ID=85067386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211291436.1A Active CN115690765B (en) 2022-10-21 2022-10-21 License plate recognition method, device, electronic equipment, readable medium and program product

Country Status (1)

Country Link
CN (1) CN115690765B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1096626A (en) * 1996-09-20 1998-04-14 Oki Electric Ind Co Ltd Detector for distance between vehicles
CN202230531U (en) * 2011-10-24 2012-05-23 上海理工大学 Gate machine control system for parting lot
CN104282153A (en) * 2013-06-08 2015-01-14 无锡北斗星通信息科技有限公司 Intelligent recognition device for illegal operation automobile
CN105046960A (en) * 2015-07-10 2015-11-11 潘进 Method and apparatus for analyzing road congestion state and detecting illegal parking
CN105608214A (en) * 2015-12-30 2016-05-25 杭州中奥科技有限公司 Method for searching under-surveillance license plate numbers fast
US20190370588A1 (en) * 2018-05-31 2019-12-05 Sony Corporation Estimating grouped observations
US11227174B1 (en) * 2019-06-10 2022-01-18 James Alves License plate recognition
CN111368616A (en) * 2019-07-24 2020-07-03 杭州海康威视系统技术有限公司 Method, device and equipment for identifying slave vehicle
CN213814996U (en) * 2020-12-07 2021-07-27 湖北兴屹工程技术有限公司 Intelligent weak current system based on remote control
CN115203238A (en) * 2022-08-10 2022-10-18 深圳市神州路路通网络科技有限公司 License plate information input method and device, terminal equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANLIN WANG et al.: "Sequence recognition of Chinese license plates", Neurocomputing *
赵琦 (ZHAO Qi) et al.: "License plate recognition and positioning for intelligent parking lots" (智能停车场的车牌识别及其定位), 《科学咨询》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116362267A (en) * 2023-03-02 2023-06-30 中关村科学城城市大脑股份有限公司 Identification method and device for vehicle-mounted storage battery, electronic equipment and readable medium
CN116362267B (en) * 2023-03-02 2023-09-19 中关村科学城城市大脑股份有限公司 Identification method and device for vehicle-mounted storage battery, electronic equipment and readable medium
CN116704490A (en) * 2023-08-02 2023-09-05 苏州万店掌网络科技有限公司 License plate recognition method, license plate recognition device and computer equipment
CN116704490B (en) * 2023-08-02 2023-10-10 苏州万店掌网络科技有限公司 License plate recognition method, license plate recognition device and computer equipment

Also Published As

Publication number Publication date
CN115690765B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
EP3605394A1 (en) Method and apparatus for recognizing body movement
CN115690765B (en) License plate recognition method, device, electronic equipment, readable medium and program product
EP3637310A1 (en) Method and apparatus for generating vehicle damage information
CN111310770B (en) Target detection method and device
CN110119725B (en) Method and device for detecting signal lamp
KR102606734B1 (en) Method and apparatus for spoof detection
CN113255619B (en) Lane line recognition and positioning method, electronic device, and computer-readable medium
CN115540894B (en) Vehicle trajectory planning method and device, electronic equipment and computer readable medium
CN112364843A (en) Plug-in aerial image target positioning detection method, system and equipment
CN115240157B (en) Method, apparatus, device and computer readable medium for persistence of road scene data
CN108470179B (en) Method and apparatus for detecting an object
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN112464921A (en) Obstacle detection information generation method, apparatus, device and computer readable medium
CN111382695A (en) Method and apparatus for detecting boundary points of object
CN114998962A (en) Living body detection and model training method and device
CN113033715B (en) Target detection model training method and target vehicle detection information generation method
CN116363538B (en) Bridge detection method and system based on unmanned aerial vehicle
CN113409393B (en) Method and device for identifying traffic sign
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN113450459A (en) Method and device for constructing three-dimensional model of target object
CN116343143A (en) Target detection method, storage medium, road side equipment and automatic driving system
CN116453086A (en) Method and device for identifying traffic sign and electronic equipment
CN116009581A (en) Unmanned aerial vehicle inspection method for power transmission line, unmanned aerial vehicle control terminal and storage medium
CN115375739A (en) Lane line generation method, apparatus, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant