CN115731224A - License plate detection method and device, terminal equipment and storage medium - Google Patents
- Publication number
- CN115731224A (application CN202211529879.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The application relates to the technical field of intelligent traffic systems, and provides a license plate detection method and device, a terminal device, and a storage medium. The method comprises the following steps: if it is detected that a vehicle has entered the snapshot area, determining a first pixel coordinate of the vehicle in an image captured by a target camera; determining, according to the first pixel coordinate, a second pixel coordinate of a target detection frame surrounding the license plate of the vehicle in the image captured by the target camera; and sending a snapshot instruction carrying the second pixel coordinate to the target camera to control the target camera to capture a target image of the snapshot area, and performing license plate detection on the portion of the target image inside the target detection frame to obtain the license plate information of the vehicle. The method can accurately locate the license plate in the image, thereby improving the accuracy of license plate detection.
Description
Technical Field
The application relates to the technical field of intelligent traffic systems, in particular to a license plate detection method and device, terminal equipment and a storage medium.
Background
Checkpoint snapshot (also called bayonet or gate snapshot) refers to photographing, processing, and recording passing vehicles in specific road scenes, such as urban traffic intersections, highway toll stations, and tunnel entrances and exits. During checkpoint snapshot, vehicle body information such as the license plate, color, and type of the vehicle is detected and recognized, and the body information is then bound to the license plate, so that the identity of the vehicle is determined and the efficiency of road traffic management is improved. However, the existing checkpoint snapshot system lacks an effective license plate positioning mechanism and has difficulty accurately locating the license plate in the snapshot image, so license plate detection accuracy is low.
Disclosure of Invention
In view of this, embodiments of the present application provide a license plate detection method, an apparatus, a terminal device, and a storage medium, which can improve the accuracy of license plate detection.
A first aspect of an embodiment of the present application provides a license plate detection method, including:
if it is detected that a vehicle has entered a snapshot area, determining a first pixel coordinate of the vehicle in an image captured by a target camera;
determining, according to the first pixel coordinate, a second pixel coordinate of a target detection frame surrounding the license plate of the vehicle in an image captured by the target camera;
and sending a snapshot instruction carrying the second pixel coordinate to the target camera to control the target camera to capture a target image of the snapshot area, and performing license plate detection on the portion of the target image inside the target detection frame to obtain license plate information of the vehicle.
In the embodiment of the application, when it is detected that a vehicle has entered the snapshot area, the first pixel coordinate of the vehicle in an image captured by the camera is first determined. Then, because the position of the license plate on the vehicle body is predictable, the second pixel coordinate of the target detection frame surrounding the license plate in the image captured by the camera can be predicted from the first pixel coordinate. A snapshot instruction carrying the second pixel coordinate is then sent to the camera to control it to capture a target image of the snapshot area, and license plate detection is performed on the portion of the target image inside the target detection frame to obtain the corresponding license plate information. In this process, the position of the target detection frame surrounding the license plate in the image is predicted from the position of the vehicle in the image, so the license plate can be accurately located in the image, thereby improving the accuracy of license plate detection.
In one implementation of the embodiment of the present application, whether the vehicle has entered the snapshot area may be determined by:
receiving body information of the vehicle detected by a laser radar;
acquiring a first longitude and latitude coordinate of the snapshot area through a high-precision map;
determining whether the vehicle has entered the snapshot area according to the body information and the first longitude and latitude coordinate.
Further, the vehicle body information comprises the speed of the vehicle, the size of the vehicle body and longitude and latitude coordinates of the center of the vehicle body; the determining whether the vehicle has entered the snapshot area according to the body information and the first longitude and latitude coordinate may include:
calculating to obtain a third longitude and latitude coordinate corresponding to the vehicle body center of the vehicle in the next sampling period of the laser radar according to the speed of the vehicle acquired by the laser radar in the current sampling period, the second longitude and latitude coordinate of the vehicle body center of the vehicle and the sampling period of the laser radar;
according to the body size of the vehicle and the third longitude and latitude coordinates, calculating to obtain a fourth longitude and latitude coordinate corresponding to the head part of the vehicle in the next sampling period;
comparing the first longitude and latitude coordinate with the fourth longitude and latitude coordinate, and determining whether the head part of the vehicle enters the snapshot area according to the comparison result;
if the head part of the vehicle enters the snapshot area, determining that the vehicle enters the snapshot area;
and if the head part of the vehicle has not entered the snapshot area, returning to the step of calculating the third longitude and latitude coordinate of the vehicle body center in the next sampling period according to the speed of the vehicle and the second longitude and latitude coordinate of the vehicle body center acquired by the laser radar in the current sampling period, and the sampling period of the laser radar.
Further, after determining that the vehicle has entered the snapshot area, the method may further include:
determining a lane in which the vehicle runs according to the fourth longitude and latitude coordinates;
and determining the camera corresponding to the lane as the target camera.
Further, the determining the first pixel coordinate of the vehicle in the image captured by the target camera may include:
acquiring a third pixel coordinate of the snapshot area in an image shot by the target camera;
and determining the first pixel coordinate according to the first longitude and latitude coordinate, the fourth longitude and latitude coordinate and the third pixel coordinate.
Furthermore, the first longitude and latitude coordinate comprises the longitude and latitude coordinate of each corner point of the snapshot area, and the third pixel coordinate comprises the pixel coordinate of each corner point in an image captured by the target camera; the determining the first pixel coordinate according to the first longitude and latitude coordinate, the fourth longitude and latitude coordinate, and the third pixel coordinate may include:
determining the pixel coordinates of the head part of the vehicle in the image shot by the target camera according to the longitude and latitude coordinates of each corner point, the fourth longitude and latitude coordinates and the pixel coordinates of each corner point in the image shot by the target camera;
and determining the pixel coordinate of the head part of the vehicle in the image shot by the target camera as the first pixel coordinate.
In an implementation manner of the embodiment of the present application, the determining, according to the first pixel coordinate, a second pixel coordinate of a target detection frame surrounding a license plate of the vehicle in an image captured by the target camera may include:
taking a region corresponding to the first pixel coordinate as a reference region, and adjusting the reference region to enable the reference region to surround the head part and the license plate of the vehicle;
and determining the pixel coordinate corresponding to the adjusted reference area as the second pixel coordinate.
Further, the adjusting the reference region may include:
if the driving area of the vehicle is an area where lane changes are not allowed, setting both the position and the size of the reference area to fixed values, and then adjusting the reference area according to the body size and speed of the vehicle;
if the driving area of the vehicle is an area where lane changes are allowed, setting the size of the reference area to a fixed value and the position of the reference area to a variable dynamically adjusted according to the lane in which the vehicle is driving, and then adjusting the reference area according to the body size and speed of the vehicle.
Further, the adjusting the reference area according to the body size and the speed of the vehicle may include:
determining an expansion ratio according to the body size of the vehicle, wherein the expansion ratio is in direct proportion to the body size;
performing expansion processing on the reference area according to the expansion ratio;
and performing offset processing on the position of the reference area in the vertical direction according to the speed of the vehicle.
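The expansion-and-offset adjustment described above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation: the `Region` type, the 4.5 m reference body length, and the `offset_gain` coefficient are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: float  # top-left pixel column
    y: float  # top-left pixel row
    w: float  # width in pixels
    h: float  # height in pixels

def adjust_reference_region(region, body_length_m, speed_mps,
                            base_length_m=4.5, offset_gain=2.0):
    """Scale the region about its centre in proportion to body size
    (larger body -> larger expansion ratio), then shift it vertically
    in proportion to speed. Coefficients are illustrative."""
    scale = body_length_m / base_length_m          # expansion ratio
    cx, cy = region.x + region.w / 2, region.y + region.h / 2
    w, h = region.w * scale, region.h * scale
    dy = offset_gain * speed_mps                   # vertical offset in pixels
    return Region(cx - w / 2, cy - h / 2 + dy, w, h)
```

In this sketch, a truck twice the reference length doubles the frame in each dimension, while a faster vehicle pushes the frame further down the image to compensate for its motion between the instruction and the exposure.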
In an implementation manner of the embodiment of the present application, before determining, if it is detected that a vehicle has entered a snapshot area, first pixel coordinates of the vehicle in an image captured by a target camera, the method may further include:
assigning a unique identification code to the vehicle when the vehicle is detected by a lidar;
after sending a snapshot instruction carrying the second pixel coordinate to the target camera to control the target camera to shoot a target image of the snapshot area, and performing license plate detection on an image in the target image, which is in the target detection frame, to obtain license plate information of the vehicle, the method further includes:
receiving the license plate information sent by the target camera;
and binding the license plate information and the unique identification code.
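The identification-and-binding flow above can be sketched in a few lines (an illustrative Python sketch; the `VehicleRegistry` class and its method names are assumptions, not part of the application):

```python
import itertools

class VehicleRegistry:
    """Assign each lidar-detected vehicle a unique identification code,
    then bind the license plate reported by the camera to that code."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.plates = {}  # unique id -> license plate string

    def register(self):
        """Called when the laser radar first detects a vehicle."""
        return next(self._ids)

    def bind(self, vehicle_id, plate):
        """Called when the target camera returns the plate information."""
        self.plates[vehicle_id] = plate
```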
A second aspect of the embodiments of the present application provides a license plate detection apparatus, including:
the first pixel coordinate determination module is used for determining a first pixel coordinate of the vehicle in an image shot by a target camera if the vehicle is detected to enter a snapshot area;
the second pixel coordinate determination module is used for determining a second pixel coordinate of a target detection frame surrounding the license plate of the vehicle in an image obtained by the target camera according to the first pixel coordinate;
and the snapshot instruction sending module is used for sending a snapshot instruction carrying the second pixel coordinate to the target camera so as to control the target camera to shoot a target image of the snapshot area, and performing license plate detection on an image in the target image, which is located in the target detection frame, so as to obtain license plate information of the vehicle.
A third aspect of an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the license plate detection method provided in the first aspect of the embodiment of the present application when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the license plate detection method provided in the first aspect of embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product, which, when running on a terminal device, enables the terminal device to execute the license plate detection method provided in the first aspect of the embodiments of the present application.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
FIG. 1 is a schematic diagram of a license plate detecting system according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a license plate detection method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a two-lane gate capture area provided in an embodiment of the present application;
fig. 4 is a schematic diagram of four corner points of a snapshot area provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a license plate detection apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail. Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Checkpoint snapshot is used for photographing, processing, and recording passing vehicles in specific road scenes, such as urban traffic intersections, highway toll stations, and tunnel entrances and exits. At present, existing checkpoint snapshot systems lack an effective license plate positioning mechanism and have difficulty accurately locating the license plate in the snapshot image, so license plate detection accuracy is low. In view of this, embodiments of the present application provide a license plate detection method and device, a terminal device, and a storage medium, which can improve the accuracy of license plate detection. For more detailed technical implementation details of the embodiments of the present application, refer to the following embodiments.
Referring to fig. 1, a schematic diagram of a license plate detection system according to an embodiment of the present application is shown. The license plate detection system shown in fig. 1 includes a terminal device, cameras, and a laser radar. The cameras are used to capture images of a designated area on the lanes; their number and installation positions are not limited, as long as the lane area can be captured successfully. In actual operation, a gantry may be set up at a toll station, a tunnel entrance or exit, or a similar location, and high-definition cameras may be mounted on the gantry; the number and positions of the installed cameras may correspond one-to-one to the number and positions of the lanes, so that vehicles entering each lane can be captured with high precision. The laser radar is used to sense and track vehicles entering its detection range from outside the scene, and can detect point cloud data of vehicles within that range; in actual operation, a certain number of laser radars may be installed on the gantry, possibly at the same position as the cameras. It should be noted that the detection area of the laser radar should be set wider than the capture area of the cameras, so that the laser radar can detect a vehicle before it reaches the capture area and thus determine the timing for sending the snapshot instruction to the camera. The terminal device is the main control device of the license plate detection system and interacts with the cameras and the laser radar; it is mainly responsible for tasks such as data acquisition, data analysis and processing, and snapshot instruction output. For details of the specific working principle and technical implementation of the system shown in fig. 1, reference is made to the method embodiments described below.
Referring to fig. 2, a license plate detection method provided in an embodiment of the present application is shown, including:
201. if the fact that the vehicle enters the snapshot area is detected, determining a first pixel coordinate of the vehicle in an image obtained by shooting of the target camera;
it should be understood that the license plate detection method provided in the embodiment of the present application is applied to the license plate detection system shown in fig. 1, and the execution subject is a terminal device in the license plate detection system.
First, the terminal device detects whether a vehicle has entered the snapshot area; if so, the pixel coordinates of the vehicle in an image captured by the target camera are determined, and these are referred to as the first pixel coordinates. For example, if a vehicle is driving in lane 1 and camera A is responsible for capturing vehicles in lane 1, then a certain range on lane 1 at a certain distance (generally about 20 meters) from camera A is the snapshot area, and camera A is the target camera.
Fig. 3 is a schematic diagram of a two-lane checkpoint capture area. In fig. 3, the position at 0 m is the position of the checkpoint cameras, where camera A is responsible for capturing lane 1 and camera B for lane 2. The rectangular snapshot areas lie 19 m to 25 m from the checkpoint cameras, and together the snapshot areas of all cameras cover all the lanes, with the snapshot area of camera A being area A and that of camera B being area B.
The terminal equipment can judge whether the vehicle enters the snapshot area in real time according to the vehicle point cloud data detected by the laser radar, and if the vehicle does not enter the snapshot area, the vehicle position detection is kept; if the vehicle has entered the capture area, the position of the vehicle in the image captured by the target camera, i.e. the first pixel coordinates, is determined.
In one implementation of the embodiment of the present application, whether the vehicle has entered the snapshot area may be determined by:
(1) Receiving body information of the vehicle detected by a laser radar;
(2) Acquiring a first longitude and latitude coordinate of the snapshot area through a high-precision map;
(3) Determining whether the vehicle has entered the snapshot area according to the body information and the first longitude and latitude coordinate.
Before entering the snapshot area, the vehicle first enters the detection area of the laser radar. The laser radar feeds the detected vehicle point cloud data into a three-dimensional point cloud detection algorithm, obtains vehicle body information such as the real-time longitude and latitude coordinates, speed, and size of the vehicle, and then sends this body information to the terminal device. For the specific method by which the laser radar detects vehicle body information through a three-dimensional point cloud detection algorithm, reference may be made to the prior art, which is not repeated herein. Since the geographic position of the snapshot area is known, the terminal device can acquire the longitude and latitude coordinates of the snapshot area, referred to as the first longitude and latitude coordinates, from a high-precision map. The terminal device can then determine whether the vehicle has entered the snapshot area according to the first longitude and latitude coordinates and the received body information. For example, the current position of the vehicle can be determined from the body information, and it can then be judged whether that position falls within the range corresponding to the first longitude and latitude coordinates.
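As a minimal sketch of this range check, and assuming the snapshot area is a rectangle aligned with the longitude/latitude axes (a real deployment might need a general point-in-polygon test), the comparison could look like the following; all names are illustrative:

```python
def in_snapshot_area(lon, lat, area):
    """area: (lon_min, lat_min, lon_max, lat_max) of the snapshot
    rectangle, taken from the high-precision map (the 'first
    longitude and latitude coordinates')."""
    lon_min, lat_min, lon_max, lat_max = area
    return lon_min <= lon <= lon_max and lat_min <= lat <= lat_max

def vehicle_entered(body_info, area):
    """body_info: one lidar detection, e.g. {'center_lon': ..., 'center_lat': ...}."""
    return in_snapshot_area(body_info["center_lon"], body_info["center_lat"], area)
```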
Further, the vehicle body information comprises the speed of the vehicle, the size of the vehicle body and longitude and latitude coordinates of the center of the vehicle body; the determining whether the vehicle has entered the snapshot area according to the body information and the first longitude and latitude coordinate may include:
(1) Calculating to obtain a third longitude and latitude coordinate corresponding to the vehicle body center of the vehicle in the next sampling period of the laser radar according to the speed of the vehicle acquired by the laser radar in the current sampling period, the second longitude and latitude coordinate of the vehicle body center of the vehicle and the sampling period of the laser radar;
(2) According to the size of the vehicle body and the third longitude and latitude coordinates, calculating to obtain a fourth longitude and latitude coordinate corresponding to the head part of the vehicle in the next sampling period;
(3) Comparing the first longitude and latitude coordinate with the fourth longitude and latitude coordinate, and determining whether the head part of the vehicle enters the snapshot area according to the comparison result;
(4) If the head part of the vehicle enters the snapshot area, determining that the vehicle enters the snapshot area;
(5) And if the head part of the vehicle does not enter the snapshot area, returning to execute the step of calculating and obtaining a third longitude and latitude coordinate corresponding to the center of the vehicle body of the vehicle in the next sampling period of the laser radar according to the speed of the vehicle, the second longitude and latitude coordinate of the center of the vehicle body of the vehicle and the sampling period of the laser radar, which are acquired by the laser radar in the current sampling period.
The laser radar detects according to the sensed vehicle point cloud data, and vehicle body information such as the speed of the vehicle, the size of the vehicle body, the longitude and latitude coordinates of the center of the vehicle body and the like can be obtained. When judging whether the vehicle enters the snapshot area or not according to the vehicle body information and the first longitude and latitude coordinate, the corresponding longitude and latitude coordinate (represented by the third longitude and latitude coordinate) of the vehicle body center of the vehicle in the next sampling period of the laser radar can be obtained through calculation according to the speed of the vehicle, the longitude and latitude coordinate (represented by the second longitude and latitude coordinate) of the vehicle body center and the sampling period of the laser radar, which are acquired by the laser radar in the current sampling period.
For example, assume the sampling period of the laser radar is 100 ms, i.e., it produces a detection every 100 ms. Since this interval is very short, the vehicle can be regarded as moving in a straight line at a constant speed. Under this assumption, the predicted vehicle body distance, i.e., the distance in meters that the vehicle advances between consecutive laser radar frames, can be calculated according to the following formula:
x_pre = v_cor × v × t + x_cor
where x_pre denotes the predicted vehicle body distance, v_cor the speed correction coefficient, v the detected vehicle speed, t the inter-frame time interval (i.e., the sampling period), and x_cor the distance correction coefficient. By default, the speed correction coefficient is 1 and the distance correction coefficient is 0, i.e., no adjustment is made; if the positions of the license plate detection frames obtained in subsequent steps turn out to be inaccurate, these two coefficients can be adjusted appropriately.
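The prediction formula and the step of advancing the body-centre coordinate can be sketched as follows. The heading parameter and the flat-earth conversion (about 111,320 m per degree of latitude, a standard approximation valid over a few metres) are assumptions added for the example:

```python
import math

def predicted_distance(v, t, v_cor=1.0, x_cor=0.0):
    """x_pre = v_cor * v * t + x_cor (metres advanced between lidar frames)."""
    return v_cor * v * t + x_cor

def next_body_center(lat, lon, heading_deg, v, t=0.1):
    """Advance the body-centre coordinate by the predicted distance along
    the heading, using a flat-earth approximation (hypothetical helper;
    111_320 m per degree of latitude is a standard approximation)."""
    d = predicted_distance(v, t)
    dlat = d * math.cos(math.radians(heading_deg)) / 111_320.0
    dlon = d * math.sin(math.radians(heading_deg)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```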
After the vehicle body predicted distance is obtained, the longitude and latitude coordinates (second longitude and latitude coordinates) of the vehicle body center acquired by the laser radar in the current sampling period and the vehicle body predicted distance can be combined to calculate and obtain the corresponding longitude and latitude coordinates (third longitude and latitude coordinates) of the vehicle body center in the next sampling period of the laser radar.
Then, according to the body size of the vehicle and the third longitude and latitude coordinates, the longitude and latitude coordinates (expressed by the fourth longitude and latitude coordinates) corresponding to the head part of the vehicle in the next sampling period of the laser radar can be calculated. Specifically, the width of the vehicle and the distance between the vehicle head and the center of the vehicle body can be obtained according to the size of the vehicle body, so that the longitude and latitude coordinates of the vehicle head part can be obtained through calculation by combining the longitude and latitude coordinates of the center of the vehicle body, the width of the vehicle and the distance between the vehicle head and the center of the vehicle body, for example, the longitude and latitude coordinates of the left end and the right end of the vehicle head.
Then, the longitude and latitude coordinates (first longitude and latitude coordinates) of the snapshot area and the longitude and latitude coordinates (fourth longitude and latitude coordinates) of the vehicle head portion are compared, and whether the vehicle head portion of the vehicle enters the snapshot area or not can be judged. For example, it may be determined whether the longitude and latitude coordinates of the vehicle head portion are within a range corresponding to the longitude and latitude coordinates of the snapshot area, if so, it indicates that the vehicle head portion of the vehicle has entered the snapshot area, otherwise, it indicates that the vehicle head portion of the vehicle has not entered the snapshot area.
If the head portion of the vehicle has entered the snapshot area, it may be determined that the vehicle has entered the snapshot area. Since the head portion of the vehicle enters the snapshot area first when the vehicle is traveling, it is most accurate to determine whether the vehicle has entered the snapshot area based on the position of the head portion. If the head part of the vehicle does not enter the snapshot area, the vehicle can be determined not to enter the snapshot area, the detection result of the laser radar of the next frame is continuously read at the moment, and the process is repeated until the vehicle is determined to enter the snapshot area.
Further, after determining that the vehicle has entered the snapshot area, the method may further include:
(1) Determining a lane in which the vehicle runs according to the fourth longitude and latitude coordinates;
(2) And determining the camera corresponding to the lane as the target camera.
When it is detected that the vehicle has entered the snapshot area, the lane number can be determined from the longitude and latitude coordinates of the vehicle head portion (the fourth longitude and latitude coordinates), that is, the lane in which the vehicle is driving is determined, so that the corresponding camera can be used for the snapshot. For example, since the longitude and latitude coordinates of each lane are known, the lane in which the vehicle is driving can be determined by comparing the longitude and latitude coordinates of the head portion with those of each lane; assuming it is lane 1, the camera corresponding to lane 1 is determined as the target camera, i.e., that camera is used to capture the image of the vehicle.
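The lane lookup can be sketched as a simple span test over known lane boundaries. This assumes each lane occupies a distinct longitude span at the snapshot line; the boundary values and camera names below are placeholders, not values from the application:

```python
# Hypothetical lane boundaries: each lane is the longitude span it
# occupies at the snapshot line (placeholder values).
LANE_BOUNDS = {1: (113.00000, 113.00004), 2: (113.00004, 113.00008)}
CAMERA_FOR_LANE = {1: "camera_A", 2: "camera_B"}

def lane_of(head_lon, lane_bounds=LANE_BOUNDS):
    """Return the lane whose longitude span contains the vehicle head."""
    for lane_id, (lo, hi) in lane_bounds.items():
        if lo <= head_lon <= hi:
            return lane_id
    return None

def target_camera(head_lon):
    lane = lane_of(head_lon)
    return CAMERA_FOR_LANE.get(lane)
```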
Further, the determining the first pixel coordinate of the vehicle in the image captured by the target camera may include:
(1) Acquiring a third pixel coordinate of the snapshot area in an image shot by the target camera;
(2) And determining the first pixel coordinate according to the first longitude and latitude coordinate, the fourth longitude and latitude coordinate and the third pixel coordinate.
When determining the first pixel coordinates of the vehicle in the image captured by the target camera, the position of the snapshot area in that image may first be acquired; this position is represented by the third pixel coordinates. In practice, the target camera may be triggered to capture an image of the snapshot area, and the pixel coordinates of the snapshot area in the image (specifically, the pixel coordinates of the four corner points of the rectangular snapshot area) are then detected. The pixel coordinates of the vehicle (mainly the head portion) in the image, i.e. the first pixel coordinates, can then be determined from the longitude and latitude coordinates of the snapshot area (the first longitude and latitude coordinates), the longitude and latitude coordinates of the head portion (the fourth longitude and latitude coordinates), and the pixel coordinates of the snapshot area (the third pixel coordinates). Concretely, the relative position of the head portion within the snapshot area is determined from the longitude and latitude coordinates of the head portion and of the snapshot area, and the pixel coordinates of the head portion are then derived from this relative position and the pixel coordinates of the snapshot area.
Furthermore, the first longitude and latitude coordinate comprises a longitude and latitude coordinate of each corner of the capturing area, and the third pixel coordinate comprises a pixel coordinate of each corner in an image captured by the target camera; the determining the first pixel coordinate according to the first longitude and latitude coordinate, the fourth longitude and latitude coordinate, and the third pixel coordinate may include:
(1) Determining the pixel coordinates of the head part of the vehicle in the image shot by the target camera according to the longitude and latitude coordinates of each corner point, the fourth longitude and latitude coordinates and the pixel coordinates of each corner point in the image shot by the target camera;
(2) And determining the pixel coordinate of the head part of the vehicle in the image obtained by the target camera as the first pixel coordinate.
The longitude and latitude coordinates of the snapshot area may specifically include the longitude and latitude coordinates of its four corner points, and the pixel coordinates of the snapshot area may likewise include the pixel coordinates of the four corner points. From the longitude and latitude coordinates of the four corner points and those of the vehicle head portion, the relative position of the head portion with respect to the four corner points can be determined; combining this relative position with the pixel coordinates of the four corner points then yields the pixel coordinates of the head portion. Fig. 4 is a schematic diagram of the four corner points of the snapshot area: the snapshot area of a lane is usually a rectangular region, and its position in the image captured by the target camera can be conveniently determined from the pixel coordinates of the four corner points of that rectangle.
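The relative-position mapping described above can be sketched as a normalized interpolation, under the simplifying assumption that both the geographic footprint and the image footprint of the snapshot area are axis-aligned rectangles. A real camera view would need a homography fitted over all four corner points; the names and layout here are illustrative only.

```python
def head_pixel_coord(head, area_ll, area_px):
    """Map the head's lon/lat into pixel coordinates via the snapshot area.

    head: (lon, lat) of the vehicle head (fourth coordinate).
    area_ll: ((lon_min, lat_min), (lon_max, lat_max)) geographic corners
    of the snapshot area (first coordinate).
    area_px: ((x_tl, y_tl), (x_br, y_br)) pixel corners of the same area
    (third coordinate). Assumes both footprints are axis-aligned
    rectangles so a normalized linear interpolation suffices.
    """
    (lon0, lat0), (lon1, lat1) = area_ll
    (x0, y0), (x1, y1) = area_px
    lon, lat = head
    u = (lon - lon0) / (lon1 - lon0)  # relative position across the area
    v = (lat - lat0) / (lat1 - lat0)
    return (x0 + u * (x1 - x0), y0 + v * (y1 - y0))
```

For instance, a head point at the geographic center of the snapshot area maps to the pixel center of the area's image footprint.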
202. Determining a second pixel coordinate of a target detection frame surrounding a license plate of the vehicle in an image obtained by a target camera according to the first pixel coordinate;
after determining that the vehicle has entered the snapshot area and determining the first pixel coordinates of the vehicle in the image captured by the target camera, the position of a target detection frame surrounding the license plate of the vehicle in that image may be determined based on the first pixel coordinates; this position is represented by the second pixel coordinates. Specifically, the area corresponding to the first pixel coordinates is used as a reference area, and the reference area is expanded in the horizontal and vertical directions to obtain a rectangular target detection frame surrounding the vehicle and its license plate. The second pixel coordinates of the target detection frame in the image may specifically be the pixel coordinates of each corner of the detection frame, for example the pixel coordinates of the upper left corner and of the lower right corner.
In an implementation manner of the embodiment of the present application, the determining, according to the first pixel coordinate, a second pixel coordinate of a target detection frame surrounding a license plate of the vehicle in an image captured by the target camera may include:
(1) Taking a region corresponding to the first pixel coordinate as a reference region, and adjusting the reference region to enable the reference region to surround the head part and the license plate of the vehicle;
(2) And determining the pixel coordinate corresponding to the adjusted reference area as the second pixel coordinate.
In operation, the area corresponding to the first pixel coordinates (that is, the area of the image in which the vehicle is located) is first taken as the reference area. The reference area is then adjusted (for example, expanded or stretched) so that it surrounds the head portion and the license plate of the vehicle. The adjusted reference area is the area where the target detection frame is located, so the pixel coordinates corresponding to the adjusted reference area can be determined as the second pixel coordinates of the target detection frame in the image.
Further, the adjusting the reference region may include:
(1) If the driving area of the vehicle is the lane-unchangeable area, setting the position and the size of the reference area as fixed values, and then adjusting the reference area according to the size and the speed of the vehicle body of the vehicle;
(2) If the driving area of the vehicle is a lane-changeable area, setting the size of the reference area as a fixed value and setting the position of the reference area as a variable dynamically adjusted according to the lane in which the vehicle is driving, and then adjusting the reference area according to the body size and speed of the vehicle.
When the reference area is adjusted, different adjustment modes may be used for different road environment scenes. If the road environment scene is, for example, a tunnel entrance, the driving area of the vehicle is a lane-unchangeable area and the movement of the vehicle is simple (no lane changes can occur). In this case, the position and size of the reference area can be set to fixed values according to the lane in which the vehicle is driving, and the reference area is then adjusted according to the body size and speed of the vehicle. If the road environment scene is an ordinary road, the driving area of the vehicle is a lane-changeable area and the movement of the vehicle is more complex (lane changes may occur); a reference area with a fixed position is then no longer applicable. Accordingly, the size of the reference area is set to a fixed value, the position of the reference area is set as a variable that is dynamically adjusted according to the lane in which the vehicle is driving, and the reference area is then adjusted according to the body size and speed of the vehicle.
Further, the adjusting the reference area according to the body size and the speed of the vehicle may include:
(1) Determining an expansion ratio according to the size of the vehicle body, wherein the expansion ratio is in direct proportion to the size of the vehicle body;
(2) Performing expansion processing on the reference area according to the expansion ratio;
(3) And performing offset processing on the position of the reference area in the vertical direction according to the speed of the vehicle.
When the reference area is adjusted according to the body size and speed of the vehicle, an expansion ratio proportional to the body size may be determined, and the reference area is then expanded by that ratio. For example, for a large vehicle such as a heavy truck or a tank truck, the large body size leads to a large expansion ratio, so that the expanded reference area covers the whole head portion and license plate of the large vehicle; for a small vehicle such as a car, the small body size leads to a small expansion ratio, which is still sufficient for the expanded reference area to cover the whole head portion and license plate. In addition, the position of the reference area may be offset in the vertical direction according to the speed of the vehicle. For example, assuming the vehicle travels from top to bottom in the image, if the speed of the vehicle is high, its position in the image will tend to be lower, so the reference area is shifted downward; if the speed is low, its position will tend to be higher, so the reference area is shifted upward. With this arrangement, the position and size of the target detection frame in the snapshot image adapt automatically to the body size and speed of the vehicle.
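The expansion-plus-offset adjustment can be sketched as follows. The constants `ratio_per_metre` and `speed_gain` are illustrative values chosen for the sketch, not values given in the patent, and the sign convention assumes a vehicle traveling from top to bottom in the image.

```python
def adjust_reference_region(box, body_length, speed,
                            ratio_per_metre=0.02, speed_gain=1.5):
    """Expand the reference box and offset it vertically.

    box: (x0, y0, x1, y1) pixel box of the vehicle (first coordinate).
    body_length: vehicle length in metres; the expansion ratio grows
    in direct proportion to it.
    speed: m/s; a positive speed shifts the box downward, matching a
    vehicle that travels from top to bottom in the image.
    """
    x0, y0, x1, y1 = box
    ratio = ratio_per_metre * body_length      # proportional to body size
    dx = (x1 - x0) * ratio / 2
    dy = (y1 - y0) * ratio / 2
    dv = speed_gain * speed                    # vertical offset in pixels
    return (x0 - dx, y0 - dy + dv, x1 + dx, y1 + dy + dv)
```

A 10 m truck thus gets a 20 % expansion on each axis, while a 4 m car gets 8 %, and a faster vehicle's box is shifted further down the frame.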
203. And sending a snapshot instruction carrying the second pixel coordinate to the target camera to control the target camera to shoot a target image of the snapshot area, and performing license plate detection on an image in the target detection frame in the target image to obtain license plate information of the vehicle.
After determining the second pixel coordinates of the target detection frame in the image, the terminal device may generate a corresponding snapshot instruction, which may include information such as the snapshot time and the second pixel coordinates. The terminal device sends the snapshot instruction to the target camera; upon receiving it, the target camera shoots an image of the snapshot area, which is denoted as the target image. The target camera then performs license plate detection on the portion of the target image inside the target detection frame to obtain the license plate information of the vehicle.
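A minimal sketch of such an instruction payload is shown below; the field names and layout are assumptions for illustration, not the patent's actual message format.

```python
import time
from dataclasses import dataclass

@dataclass
class SnapshotInstruction:
    """Illustrative snapshot-instruction payload sent to the target camera."""
    capture_time: float   # when the target camera should shoot
    detection_box: tuple  # second pixel coordinate: (x_tl, y_tl, x_br, y_br)

# Example: instruct the camera to run plate detection inside the frame.
instruction = SnapshotInstruction(capture_time=time.time(),
                                  detection_box=(80, 90, 320, 240))
```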
In an implementation manner of the embodiment of the present application, before determining, if it is detected that a vehicle has entered a snapshot area, first pixel coordinates of the vehicle in an image captured by a target camera, the method may further include:
assigning a unique identification code to the vehicle when the vehicle is detected by the lidar.
After sending the snapshot instruction carrying the second pixel coordinate to the target camera to control the target camera to shoot a target image of the snapshot area, and performing license plate detection on an image in the target image, which is in the target detection frame, to obtain license plate information of the vehicle, the method may further include:
(1) Receiving the license plate information sent by the target camera;
(2) And binding the license plate information and the unique identification code.
In order to determine the identity of the vehicle, the terminal device can assign a unique identification code, for example a vehicle ID, to the vehicle when it is first detected by the lidar. After the target camera sends the detected license plate information back to the terminal device, the terminal device can bind the license plate information to the unique identification code, thereby determining the identity of the vehicle. In addition, since a plurality of lidars are arranged in the road scene, multi-target tracking of all vehicles entering the scene can be performed, achieving vehicle identity binding across the entire road scene.
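The ID-assignment and plate-binding bookkeeping can be sketched as follows; the registry structure and function names are illustrative assumptions, not from the patent.

```python
import itertools

_ids = itertools.count(1)
registry = {}  # unique identification code -> license plate (None until bound)

def on_lidar_detection():
    """Assign a unique identification code the first time the lidar
    detects a vehicle, and return the new code."""
    vid = next(_ids)
    registry[vid] = None
    return vid

def bind_plate(vid, plate):
    """Bind the plate string reported back by the target camera
    to the vehicle's unique identification code."""
    registry[vid] = plate
```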
In the embodiment of the application, when the vehicle is detected to enter the snapshot area, first, a first pixel coordinate of the vehicle in an image obtained by shooting through a camera is determined; then, because the position of the license plate of the vehicle on the vehicle body is predictable, a second pixel coordinate of the target detection frame surrounding the license plate in the image shot by the camera can be predicted according to the first pixel coordinate; and then, sending a snapshot instruction carrying the second pixel coordinate to the camera to control the camera to shoot to obtain a target image in the snapshot area, and performing license plate detection on an image in the target image, which is in the target detection frame, so as to obtain corresponding license plate information. In the process, the position of the target detection frame surrounding the license plate in the image is obtained according to the position prediction of the vehicle in the image, and the position of the license plate in the image can be accurately positioned, so that the accuracy of license plate detection is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
A license plate detection method is mainly described above, and a license plate detection apparatus will be described below.
Referring to fig. 5, an embodiment of a license plate detection device in an embodiment of the present application includes:
a first pixel coordinate determining module 501, configured to determine a first pixel coordinate of a vehicle in an image captured by a target camera if it is detected that the vehicle has entered a snapshot area;
a second pixel coordinate determining module 502, configured to determine, according to the first pixel coordinate, a second pixel coordinate of a target detection frame surrounding a license plate of the vehicle in an image captured by the target camera;
and a snapshot instruction sending module 503, configured to send a snapshot instruction carrying the second pixel coordinate to the target camera, so as to control the target camera to shoot a target image of the snapshot area, and perform license plate detection on an image in the target image, which is located in the target detection frame, to obtain license plate information of the vehicle.
In an implementation manner of the embodiment of the present application, the license plate detection device may further include:
the vehicle body information receiving module is used for receiving the vehicle body information of the vehicle detected by the laser radar;
the first longitude and latitude coordinate acquisition module is used for acquiring a first longitude and latitude coordinate of the snapshot area through a high-precision map;
and the vehicle entering judgment module is used for determining whether the vehicle enters the snapshot area or not according to the vehicle body information and the first longitude and latitude coordinate.
Further, the vehicle body information comprises the speed of the vehicle, the size of the vehicle body and longitude and latitude coordinates of the center of the vehicle body; the vehicle entry determination module may include:
the third longitude and latitude coordinate calculation unit is used for calculating and obtaining a third longitude and latitude coordinate corresponding to the vehicle body center of the vehicle in the next sampling period of the laser radar according to the speed of the vehicle acquired by the laser radar in the current sampling period, the second longitude and latitude coordinate of the vehicle body center of the vehicle and the sampling period of the laser radar;
a fourth longitude and latitude coordinate calculation unit, configured to calculate a fourth longitude and latitude coordinate corresponding to the vehicle head portion of the vehicle in the next sampling period according to the vehicle body size of the vehicle and the third longitude and latitude coordinate;
the vehicle head entering judging unit is used for comparing the first longitude and latitude coordinate with the fourth longitude and latitude coordinate and determining whether the vehicle head part of the vehicle enters the snapshot area according to the comparison result;
a vehicle entry determination unit configured to determine that the vehicle has entered the snapshot area if a head portion of the vehicle has entered the snapshot area;
and the return execution unit is used for returning, if the vehicle head part of the vehicle does not enter the snapshot area, to the step of calculating, according to the speed of the vehicle acquired by the laser radar in the current sampling period, the second longitude and latitude coordinate of the vehicle body center of the vehicle, and the sampling period of the laser radar, the third longitude and latitude coordinate corresponding to the vehicle body center of the vehicle in the next sampling period of the laser radar.
Further, the license plate detection device may further include:
the lane determining module is used for determining a lane in which the vehicle runs according to the fourth longitude and latitude coordinate;
and the camera determining module is used for determining the camera corresponding to the lane as the target camera.
Further, the first pixel coordinate determination module may include:
the third pixel coordinate acquisition unit is used for acquiring a third pixel coordinate of the snapshot area in an image obtained by shooting of the target camera;
a first pixel coordinate determination unit, configured to determine the first pixel coordinate according to the first longitude and latitude coordinate, the fourth longitude and latitude coordinate, and the third pixel coordinate.
Furthermore, the first longitude and latitude coordinate comprises a longitude and latitude coordinate of each corner of the capturing area, and the third pixel coordinate comprises a pixel coordinate of each corner in an image captured by the target camera; the first pixel coordinate determination unit may include:
a vehicle head pixel coordinate determining subunit, configured to determine, according to the longitude and latitude coordinates of each corner point, the fourth longitude and latitude coordinates, and pixel coordinates of each corner point in the image captured by the target camera, a pixel coordinate of a vehicle head portion of the vehicle in the image captured by the target camera;
a first pixel coordinate determination subunit configured to determine, as the first pixel coordinate, a pixel coordinate of a head portion of the vehicle in an image captured by the subject camera.
In an implementation manner of the embodiment of the present application, the second pixel coordinate determining module may include:
a reference region adjusting unit, configured to take a region corresponding to the first pixel coordinate as a reference region, and adjust the reference region so that the reference region surrounds a head portion and a license plate of the vehicle;
and the second pixel coordinate determining unit is used for determining the pixel coordinate corresponding to the adjusted reference area as the second pixel coordinate.
Further, the reference region adjusting unit may include:
a first reference area adjusting subunit, configured to set, if a driving area of the vehicle is an unchangeable lane area, a position and a size of the reference area as fixed values, and then adjust the reference area according to a body size and a speed of the vehicle;
and a second reference region adjusting subunit, configured to, if a driving region of the vehicle is a variable lane region, set a size of the reference region as a fixed value and set a position of the reference region as a variable dynamically adjusted according to a lane in which the vehicle is driving, and then adjust the reference region according to a body size and a speed of the vehicle.
Further, the first reference region adjusting subunit and the second reference region adjusting subunit may include:
an expansion ratio determining subunit, configured to determine an expansion ratio according to a body size of the vehicle, where the expansion ratio is proportional to the body size;
an expansion processing subunit, configured to perform expansion processing on the reference region according to the expansion ratio;
an offset processing subunit configured to perform offset processing on a position of the reference area in the vertical direction according to a speed of the vehicle.
In an implementation manner of the embodiment of the present application, the license plate detection apparatus may further include:
a vehicle identification code assignment module for assigning a unique identification code to the vehicle when the vehicle is detected by the lidar;
the license plate information receiving module is used for receiving the license plate information sent by the target camera;
and the vehicle binding module is used for binding the license plate information and the unique identification code.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the license plate detection method represented by any of the above embodiments is implemented.
The embodiment of the present application further provides a computer program product, which when running on a terminal device, enables the terminal device to execute the license plate detection method represented by any one of the above embodiments.
Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps of the above embodiments of the license plate detection method, such as the steps 201 to 203 shown in fig. 2. Alternatively, the processor 60, when executing the computer program 62, implements the functions of each module/unit in the above-mentioned device embodiments, for example, the functions of the modules 501 to 503 shown in fig. 5.
The computer program 62 may be divided into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6.
The Processor 60 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (13)
1. A license plate detection method is characterized by comprising the following steps:
if the fact that the vehicle enters the snapshot area is detected, determining a first pixel coordinate of the vehicle in an image obtained by shooting of a target camera;
determining a second pixel coordinate of a target detection frame surrounding the license plate of the vehicle in an image obtained by the target camera according to the first pixel coordinate;
and sending a snapshot instruction carrying the second pixel coordinate to the target camera to control the target camera to shoot a target image of the snapshot area, and performing license plate detection on an image in the target detection frame in the target image to obtain license plate information of the vehicle.
2. The method of claim 1, wherein whether the vehicle has entered the snapshot area is determined by:
receiving body information of the vehicle detected by a laser radar;
acquiring a first longitude and latitude coordinate of the snapshot area through a high-precision map;
determining whether the vehicle has entered the snapshot area according to the body information and the first longitude and latitude coordinate.
3. The method of claim 2, wherein the body information includes a speed of the vehicle, a body size, and longitude and latitude coordinates of a body center; the determining whether the vehicle has entered the snapshot area according to the body information and the first longitude and latitude coordinate includes:
calculating to obtain a third longitude and latitude coordinate corresponding to the vehicle body center of the vehicle in the next sampling period of the laser radar according to the speed of the vehicle acquired by the laser radar in the current sampling period, the second longitude and latitude coordinate of the vehicle body center of the vehicle and the sampling period of the laser radar;
according to the body size of the vehicle and the third longitude and latitude coordinates, calculating to obtain a fourth longitude and latitude coordinate corresponding to the head part of the vehicle in the next sampling period;
comparing the first longitude and latitude coordinate with the fourth longitude and latitude coordinate, and determining whether the head part of the vehicle enters the snapshot area according to the comparison result;
if the head part of the vehicle enters the snapshot area, determining that the vehicle enters the snapshot area;
and if the head part of the vehicle does not enter the snapshot area, returning to execute the step of calculating and obtaining a third longitude and latitude coordinate corresponding to the center of the vehicle body of the vehicle in the next sampling period of the laser radar according to the speed of the vehicle, the second longitude and latitude coordinate of the center of the vehicle body of the vehicle and the sampling period of the laser radar, which are acquired by the laser radar in the current sampling period.
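The position update in claim 3 can be sketched as a simple dead-reckoning step: advance the body-center longitude/latitude by speed × sampling period along the vehicle's heading. The patent does not give the formula, so the flat-earth approximation, the heading parameter, and all names below are illustrative assumptions.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius, metres

def predict_next_position(lon_deg, lat_deg, speed_mps, heading_deg, period_s):
    """Predict the body-center longitude/latitude one lidar sampling period ahead.

    Uses an equirectangular (flat-earth) approximation, which is adequate
    for the few metres a vehicle travels between samples. heading_deg is
    measured clockwise from due north.
    """
    dist = speed_mps * period_s                  # metres travelled in one period
    heading = math.radians(heading_deg)
    d_north = dist * math.cos(heading)
    d_east = dist * math.sin(heading)
    # convert metre offsets to degree offsets at this latitude
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lon_deg + d_lon, lat_deg + d_lat
```

The head-part coordinate of the claim would then be obtained by advancing half the body length further along the same heading.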
4. The method of claim 3, after determining that the vehicle has entered the snapshot area, further comprising:
determining a lane in which the vehicle runs according to the fourth longitude and latitude coordinates;
and determining the camera corresponding to the lane as the target camera.
5. The method of claim 3, wherein determining the first pixel coordinates of the vehicle in the image captured by the subject camera comprises:
acquiring a third pixel coordinate of the snapshot area in an image shot by the target camera;
and determining the first pixel coordinate according to the first longitude and latitude coordinate, the fourth longitude and latitude coordinate and the third pixel coordinate.
6. The method of claim 5, wherein the first longitude and latitude coordinate comprises a longitude and latitude coordinate of each corner point of the snapshot area, and the third pixel coordinate comprises a pixel coordinate of each corner point in an image captured by the target camera; the determining the first pixel coordinate according to the first longitude and latitude coordinate, the fourth longitude and latitude coordinate, and the third pixel coordinate includes:
determining the pixel coordinates of the head part of the vehicle in the image shot by the target camera according to the longitude and latitude coordinates of each corner point, the fourth longitude and latitude coordinates and the pixel coordinates of each corner point in the image shot by the target camera;
and determining the pixel coordinate of the head part of the vehicle in the image obtained by the target camera as the first pixel coordinate.
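Claim 6 maps a longitude/latitude point into camera pixels using the four corner correspondences of the snapshot area. One simple realisation is bilinear interpolation over the quad; the patent does not specify the interpolation, so this sketch (and its assumption that the snapshot area is roughly axis-aligned in longitude/latitude) is illustrative. A full planar homography would be the more general choice.

```python
def latlon_to_pixel(pt, corners_ll, corners_px):
    """Map a longitude/latitude point inside the snapshot area to pixel
    coordinates by bilinear interpolation over four corner correspondences.

    corners_ll / corners_px are ordered (top-left, top-right,
    bottom-right, bottom-left).
    """
    lon, lat = pt
    tl, tr, br, bl = corners_ll
    # normalised position inside the lon/lat bounding box of the area
    u = (lon - tl[0]) / (tr[0] - tl[0])
    v = (lat - tl[1]) / (bl[1] - tl[1])
    ptl, ptr, pbr, pbl = corners_px
    # interpolate along the top and bottom edges, then between them
    top = (ptl[0] + u * (ptr[0] - ptl[0]), ptl[1] + u * (ptr[1] - ptl[1]))
    bot = (pbl[0] + u * (pbr[0] - pbl[0]), pbl[1] + u * (pbr[1] - pbl[1]))
    return (top[0] + v * (bot[0] - top[0]), top[1] + v * (bot[1] - top[1]))
```

Feeding the fourth longitude and latitude coordinate (the predicted head position) through such a mapping yields the first pixel coordinate of the claim.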
7. The method of claim 1, wherein determining second pixel coordinates of an object detection frame surrounding a license plate of the vehicle in an image captured by the object camera based on the first pixel coordinates comprises:
taking a region corresponding to the first pixel coordinate as a reference region, and adjusting the reference region to enable the reference region to surround the head part and the license plate of the vehicle;
and determining the pixel coordinate corresponding to the adjusted reference area as the second pixel coordinate.
8. The method of claim 7, wherein said adjusting said reference region comprises:
if the driving area of the vehicle is a lane-unchangeable area, setting the position and the size of the reference area as fixed values, and then adjusting the reference area according to the body size and speed of the vehicle;
if the driving area of the vehicle is a lane-changeable area, setting the size of the reference area as a fixed value and setting the position of the reference area as a variable dynamically adjusted according to the lane in which the vehicle is driving, and then adjusting the reference area according to the size and speed of the vehicle body.
9. The method of claim 8, wherein said adjusting the reference area based on body size and speed of the vehicle comprises:
determining an expansion ratio according to the body size of the vehicle, wherein the expansion ratio is in direct proportion to the body size;
performing expansion processing on the reference area according to the expansion ratio;
and performing offset processing on the position of the reference area in the vertical direction according to the speed of the vehicle.
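The adjustment in claims 8-9 amounts to scaling the reference region in proportion to body size and shifting it vertically with speed. The patent gives no constants, so the scaling baseline, the speed coefficient, and the shift direction below are illustrative assumptions only.

```python
def adjust_reference_region(region, body_len_m, speed_mps,
                            base_len_m=5.0, k_speed=2.0):
    """Expand a reference region (x, y, w, h) in pixels in proportion to
    vehicle body length, then offset it vertically according to speed.

    base_len_m is an assumed 'typical' body length; k_speed is an assumed
    pixels-per-(m/s) offset coefficient.
    """
    x, y, w, h = region
    scale = max(1.0, body_len_m / base_len_m)   # expansion ratio ∝ body size
    new_w, new_h = w * scale, h * scale
    # keep the region centred while expanding
    x -= (new_w - w) / 2.0
    y -= (new_h - h) / 2.0
    # a faster vehicle moves further during capture latency, so shift the
    # region upward in the image, towards the approaching vehicle
    y -= k_speed * speed_mps
    return (x, y, new_w, new_h)
```

In practice such constants would be tuned per camera installation; the claim only requires that the expansion ratio be directly proportional to body size and the vertical offset depend on speed.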
10. The method of any one of claims 1 to 9, further comprising, prior to determining a first pixel coordinate of the vehicle in an image captured by the target camera if it is detected that the vehicle has entered the snapshot area:
assigning a unique identification code to the vehicle when the vehicle is detected by a lidar;
after sending a snapshot instruction carrying the second pixel coordinate to the target camera to control the target camera to shoot a target image of the snapshot area, and performing license plate detection on an image in the target image, which is in the target detection frame, to obtain license plate information of the vehicle, the method further includes:
receiving the license plate information sent by the target camera;
and binding the license plate information and the unique identification code.
11. A license plate detection device, characterized by comprising:
the first pixel coordinate determination module is used for determining a first pixel coordinate of the vehicle in an image shot by a target camera if the vehicle is detected to enter a snapshot area;
the second pixel coordinate determination module is used for determining a second pixel coordinate of a target detection frame surrounding the license plate of the vehicle in an image obtained by the target camera according to the first pixel coordinate;
and the snapshot instruction sending module is used for sending a snapshot instruction carrying the second pixel coordinate to the target camera so as to control the target camera to shoot a target image of the snapshot area, and performing license plate detection on an image in the target image, which is located in the target detection frame, so as to obtain license plate information of the vehicle.
12. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the license plate detection method according to any one of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the license plate detection method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211529879.XA CN115731224B (en) | 2022-11-30 | 2022-11-30 | License plate detection method and device, terminal equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211529879.XA CN115731224B (en) | 2022-11-30 | 2022-11-30 | License plate detection method and device, terminal equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115731224A true CN115731224A (en) | 2023-03-03 |
CN115731224B CN115731224B (en) | 2023-10-10 |
Family
ID=85299716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211529879.XA Active CN115731224B (en) | 2022-11-30 | 2022-11-30 | License plate detection method and device, terminal equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115731224B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116434563A (en) * | 2023-03-13 | 2023-07-14 | 山东华夏高科信息股份有限公司 | Method, system, equipment and storage medium for detecting vehicle overguard |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013168737A (en) * | 2012-02-14 | 2013-08-29 | Toyota Motor Corp | Exposure control device |
CN105654084A (en) * | 2015-12-29 | 2016-06-08 | 北京万集科技股份有限公司 | Laser-based vehicle license plate positioning method, device and system |
CN110910651A (en) * | 2019-11-08 | 2020-03-24 | 北京万集科技股份有限公司 | License plate information matching method and system, storage medium and electronic device |
CN112131328A (en) * | 2020-08-14 | 2020-12-25 | 深圳市麦谷科技有限公司 | Vehicle management method and device, terminal equipment and storage medium |
CN112562346A (en) * | 2020-11-27 | 2021-03-26 | 天地伟业技术有限公司 | Method for confirming position of target area when intelligent ball-breaking machine shoots close-range picture |
CN114332785A (en) * | 2021-12-31 | 2022-04-12 | 航天科工智慧产业发展有限公司 | Face matting method, device, equipment and storage medium based on bayonet camera |
Also Published As
Publication number | Publication date |
---|---|
CN115731224B (en) | 2023-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11131752B2 (en) | Vehicle navigation system using pose estimation based on point cloud | |
CN110174093B (en) | Positioning method, device, equipment and computer readable storage medium | |
US8238610B2 (en) | Homography-based passive vehicle speed measuring | |
JP2020525809A (en) | System and method for updating high resolution maps based on binocular images | |
CN111754581A (en) | Camera calibration method, roadside sensing equipment and intelligent traffic system | |
CN111025308B (en) | Vehicle positioning method, device, system and storage medium | |
CA3027921A1 (en) | Integrated sensor calibration in natural scenes | |
CN103514746B (en) | Vehicle speed measurement method based on DSRC, device and DSRC application system | |
CN113465608B (en) | Road side sensor calibration method and system | |
CN115731224B (en) | License plate detection method and device, terminal equipment and storage medium | |
CN111539305B (en) | Map construction method and system, vehicle and storage medium | |
CN114495512A (en) | Vehicle information detection method and system, electronic device and readable storage medium | |
CN116884235B (en) | Video vehicle speed detection method, device and equipment based on wire collision and storage medium | |
CN113192217A (en) | Fee evasion detection method, fee evasion detection device, computer equipment and medium | |
CN110444026B (en) | Triggering snapshot method and system for vehicle | |
CN102622888B (en) | Self-learning identification method and device for licence plate based on video detection | |
CN115272490B (en) | Method for calibrating camera of road-end traffic detection equipment | |
US20230394679A1 (en) | Method for measuring the speed of a vehicle | |
US20240071034A1 (en) | Image processing device, image processing method, and program | |
CN114113669A (en) | Vehicle speed measuring method and device based on multi-focal-length camera | |
CN114022867A (en) | Channel delay acquisition method and device, electronic equipment and storage medium | |
CN113902805A (en) | Method and system for calibrating road side equipment, road side equipment and calibration vehicle | |
CN116486342A (en) | Identity binding method, device, terminal equipment and storage medium | |
CN118818447A (en) | Combined calibration method based on road side radar and camera | |
CN115862345A (en) | Method and device for determining vehicle speed, computer storage medium and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||