CN111462502B - Method, device and computer readable storage medium for vehicle management

Info

Publication number: CN111462502B
Authority: CN (China)
Prior art keywords: vehicle, target vehicle, license plate, camera, cameras
Application number: CN201910059669.0A
Other languages: Chinese (zh)
Other versions: CN111462502A
Inventor: 罗义平 (Luo Yiping)
Current assignee: Hangzhou Hikvision Digital Technology Co Ltd
Original assignee: Hangzhou Hikvision Digital Technology Co Ltd
Priority: CN201910059669.0A
Publication of application: CN111462502A
Publication of grant: CN111462502B
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G 1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles

Abstract

The invention discloses a method and a device for vehicle management and a computer readable storage medium, belonging to the field of intelligent transportation. The method comprises: when the vehicle information sent by at least one camera of a plurality of cameras includes the vehicle information of a target vehicle, determining the complete track of the target vehicle on a straight-line section of a road according to the vehicle information of the target vehicle and the machine number of the at least one camera; acquiring at least one event occurrence process map of the target vehicle; determining an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map; and storing the event forensics map of the target vehicle in correspondence with the license plate information of the target vehicle. The target vehicle can therefore be effectively managed according to the event forensics map and the license plate information in the vehicle information.

Description

Method, device and computer readable storage medium for vehicle management
Technical Field
The present invention relates to the field of intelligent transportation, and in particular, to a method and an apparatus for vehicle management, and a computer-readable storage medium.
Background
As the number of vehicles in China keeps growing, the problems of scarce parking and illegal parking become increasingly prominent. To alleviate these problems, a vehicle management method that automates the management of vehicles is urgently needed.
At present, a plurality of parking spaces are usually marked out at the side of a road, and a plurality of cameras are arranged near the parking spaces. The cameras monitor the vehicles located in and near the parking spaces, recognize their license plate numbers, and determine whether a vehicle warehousing event, a vehicle ex-warehouse event, or a vehicle illegal parking event has occurred, so that vehicle management can be performed according to the license plate number of each vehicle and the events it generates. For example, take a camera A and a vehicle B within the monitoring range of camera A: camera A monitors vehicle B, determines from the monitoring video whether a vehicle warehousing event, a vehicle ex-warehouse event, or a vehicle illegal parking event of vehicle B has occurred, acquires from the monitoring video a video frame image that includes vehicle B, and recognizes the license plate number of vehicle B from the acquired video frame image.
However, for any single camera, if a vehicle in its monitoring range is blocked, if the license plate is seriously inclined, or if the vehicle is far away so that the license plate appears small, the license plate cannot be recognized, and the vehicle is therefore difficult to manage effectively.
Disclosure of Invention
The invention provides a method and a device for vehicle management and a computer readable storage medium, which can solve the problem that a single camera cannot recognize a license plate, and therefore cannot support effective vehicle management, when the vehicle in its monitoring range is blocked or the license plate appears small because the vehicle is far away.
In a first aspect, a method of vehicle management is provided, the method comprising:
receiving vehicle information and a machine number sent by each camera of a plurality of cameras, wherein the vehicle information includes at least license plate information, the plurality of cameras are arranged along a straight-line section of a road following the direction of the road, the monitoring ranges of every two adjacent cameras overlap, and each camera has a unique machine number;
when the vehicle information sent by at least one camera of the plurality of cameras includes the vehicle information of a target vehicle, determining a complete track of the target vehicle on the straight-line section of the road according to the vehicle information of the target vehicle in the vehicle information sent by the at least one camera and the machine number of the at least one camera, wherein the target vehicle is any vehicle currently subjected to vehicle management;
acquiring at least one event occurrence process map of the target vehicle, wherein the at least one event occurrence process map is captured when any one of the plurality of cameras detects that a vehicle warehousing event, a vehicle ex-warehouse event, or a vehicle illegal parking event of the target vehicle occurs on the straight-line section of the road;
and determining an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map, and storing the event forensics map of the target vehicle in correspondence with the license plate information of the target vehicle.
In one possible implementation manner, the vehicle information of the target vehicle further includes calibration coordinates of the target vehicle in a calibration coordinate system;
the determining a complete track of the target vehicle on the straight-line section of the road according to the vehicle information of the target vehicle and the machine number of the at least one camera in the vehicle information sent by the at least one camera comprises:
according to the machine number of the at least one camera, converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into at least one corresponding splicing coordinate in a splicing coordinate system, wherein the splicing coordinate system is a coordinate system used for drawing the complete track;
determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the vehicle information sent by the at least one camera;
when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connecting the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system, and returning to the step of receiving the vehicle information and the machine number sent by each camera of the plurality of cameras, until the current splicing coordinate of the target vehicle in the splicing coordinate system is the same as the previous splicing coordinate, and obtaining the complete track of the target vehicle after the current splicing coordinate and the previous splicing coordinate in the splicing coordinate system are connected.
In one possible implementation, the machine numbers of the plurality of cameras are increased one by one starting from 0;
the converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into corresponding at least one splicing coordinate in a splicing coordinate system according to the machine number of the at least one camera includes:
taking the abscissa of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera as the abscissa of the target vehicle in the splicing coordinate system, and correspondingly adding the ordinate of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera to the machine number of the at least one camera to obtain the ordinate of the target vehicle in the splicing coordinate system.
In a possible implementation manner, the determining, according to the license plate information of the target vehicle and the at least one splicing coordinate in the vehicle information sent by the at least one camera, a current splicing coordinate of the target vehicle in the splicing coordinate system includes:
when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, determining the splicing coordinate corresponding to a first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system, wherein, compared with the other cameras of the at least one camera, the pixel area occupied by the target vehicle in the video frame image shot by the first camera is the largest;
when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are not completely the same, determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate;
when the determined matching degree is within a preset matching degree range, determining the distance between the two splicing coordinates;
and when the distance between the two splicing coordinates is smaller than a preset distance, determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system.
In a possible implementation manner, before the converting, according to the machine number of the at least one camera, the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into the corresponding at least one splicing coordinate in the splicing coordinate system, the method further includes:
receiving a reference image shot by each camera of the plurality of cameras and the machine number of that camera, wherein each reference image includes two calibration lines distributed up and down and at least two lane lines distributed left and right, and the reference images shot by every two adjacent cameras share one identical calibration line;
and establishing the splicing coordinate system according to the machine numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the received multiple reference images.
In a possible implementation manner, the establishing the splicing coordinate system according to the machine numbers of the plurality of cameras and the two calibration lines and at least two lane lines included in each of the received plurality of reference images includes:
according to the machine numbers of the cameras, the same calibration lines in the reference images shot by two adjacent cameras in the cameras are overlapped, and the leftmost lane lines in the reference images are connected to obtain a vertical connecting line;
determining the direction of the machine numbers of the plurality of cameras from small to large as the direction of the vertical connecting line;
acquiring a bottommost calibration line in a reference image shot by a camera with the smallest machine number, and determining the horizontal right direction as the direction of the bottommost calibration line;
and taking the directed vertical connecting line as the longitudinal axis of the splicing coordinate system, taking the directed bottommost calibration line as the transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
In one possible implementation, the determining an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map includes:
selecting a first event occurrence process map from the at least one event occurrence process map, wherein the first event occurrence process map is the event occurrence process map in which the pixel area occupied by the target vehicle is the largest;
intercepting a license plate expansion area from the first event occurrence process map, and determining the intercepted license plate expansion area as a vehicle close-up map of the target vehicle, wherein the license plate expansion area is an area that is expanded from the license plate area until it includes the head or the tail of the target vehicle;
acquiring a license plate recognition map and an overlap area map of the target vehicle, wherein the license plate recognition map is the image in which the license plate number could be recognized for the last time before the complete track was formed when the vehicle warehousing event, vehicle ex-warehouse event, or vehicle illegal parking event of the target vehicle occurred, and the overlap area map is the video frame image shot the last time the target vehicle was located in the overlapping part of the monitoring ranges of two adjacent cameras before the complete track was formed;
and determining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map and the overlapping area map as the event forensics map.
In one possible implementation manner, the acquiring the license plate recognition map and the overlap area map of the target vehicle includes:
determining the number of cameras that can monitor the target vehicle the last time before the complete trajectory is formed;
when the number of the cameras is one, extracting the license plate recognition image and the overlapping area image from the video frame image shot by the camera which can monitor the target vehicle at the last time;
and when the number of the cameras is two, extracting the license plate recognition image and the overlapping area image from a video frame image shot by any one camera which can monitor the target vehicle at the last time.
In one possible implementation, the determining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap region map as the event forensics map includes:
and combining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap area map into one image, and taking the combined image as the event forensics map.
In a second aspect, there is provided an apparatus for vehicle management, the apparatus comprising:
the system comprises a receiving module, a processing module and a display module, wherein the receiving module is used for receiving vehicle information and a machine number of each camera in a plurality of cameras, the vehicle information at least comprises license plate information, the cameras are arranged on a straight line section part of a road along the trend of the road, the monitoring ranges of two adjacent cameras in the cameras are overlapped, and each camera is provided with a unique machine number;
the determining module is used for determining a complete track of the target vehicle on the straight line section of the road according to the vehicle information of the target vehicle and the machine number of at least one camera in the vehicle information sent by the at least one camera when the vehicle information sent by the at least one camera in the plurality of cameras comprises the vehicle information of the target vehicle, wherein the target vehicle is any one vehicle which is currently subjected to vehicle management;
the acquisition module is used for acquiring at least one event occurrence process chart of the target vehicle, wherein the at least one event occurrence process chart is obtained by shooting when any one camera in the plurality of cameras detects that the target vehicle has a vehicle warehousing event, a vehicle ex-warehousing event or a vehicle illegal parking event in the straight-line section of the road;
and the storage module is used for determining the event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map, and for storing the event forensics map of the target vehicle in correspondence with the license plate information of the target vehicle.
In one possible implementation manner, the vehicle information of the target vehicle further includes a calibration coordinate of the target vehicle in a calibration coordinate system, and the determining module includes:
the conversion submodule is used for converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into at least one corresponding splicing coordinate in a splicing coordinate system according to the machine number of the at least one camera, wherein the splicing coordinate system is a coordinate system used for drawing the complete track;
the first determining submodule is used for determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the vehicle information sent by the at least one camera;
and the connecting submodule is used for: when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connecting the previous splicing coordinate and the current splicing coordinate, and returning to the step of receiving the vehicle information and the machine number sent by each camera of the plurality of cameras, until the current splicing coordinate of the target vehicle in the splicing coordinate system is the same as the previous splicing coordinate, and obtaining the complete track of the target vehicle after the current splicing coordinate and the previous splicing coordinate are connected.
In one possible implementation, the machine numbers of the plurality of cameras are increased one by one starting from 0;
the conversion submodule is further configured to take the abscissa of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera as the abscissa of the target vehicle in the splicing coordinate system, and to correspondingly add the ordinate of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera to the machine number of the at least one camera to obtain the ordinate of the target vehicle in the splicing coordinate system.
In one possible implementation, the first determining sub-module includes:
a first determining unit, configured to: when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, determine the splicing coordinate corresponding to a first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system, wherein, compared with the other cameras of the at least one camera, the pixel area occupied by the target vehicle in the video frame image shot by the first camera is the largest;
a second determining unit, configured to: when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are not completely the same, determine the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate;
a third determining unit, configured to determine the distance between the two splicing coordinates when the determined matching degree is within a preset matching degree range;
and a fourth determining unit, configured to determine the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system when the distance between the two splicing coordinates is smaller than the preset distance.
In one possible implementation manner, the determining module further includes:
the receiving submodule is used for receiving a reference image shot by each camera in the plurality of cameras and a machine number of the receiving submodule, each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and one identical calibration line is arranged in the reference images shot by two adjacent cameras;
and the establishing submodule is used for establishing the splicing coordinate system according to the machine numbers of the cameras, and the two calibration lines and at least two lane lines included in each of the received multiple reference images.
In one possible implementation, the establishing sub-module includes:
the connecting unit is used for superposing the same calibration lines in the reference images shot by two adjacent cameras in the cameras according to the machine numbers of the cameras, and connecting the leftmost lane lines in the reference images to obtain a vertical connecting line;
a fifth determining unit, configured to determine a direction in which the machine numbers of the plurality of cameras are from small to large as a direction of the vertical connecting line;
a sixth determining unit configured to acquire a lowermost calibration line in a reference image captured by a camera having a smallest machine number, and determine a horizontal rightward direction as a direction of the lowermost calibration line;
and the establishing unit is used for taking the directed vertical connecting line as the longitudinal axis of the splicing coordinate system, taking the directed bottommost calibration line as the transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
In one possible implementation, the storage module includes:
the selecting submodule is used for selecting a first event occurrence process diagram from the at least one event occurrence process diagram, and the first event occurrence process diagram is an event occurrence process diagram with the largest pixel area occupied by the target vehicle in the at least one event occurrence process diagram;
the intercepting submodule is used for intercepting a license plate expansion area from the first event occurrence process map and determining the intercepted license plate expansion area as a vehicle close-up map of the target vehicle, wherein the license plate expansion area is an area that is expanded from the license plate area until it includes the head or the tail of the target vehicle;
the acquisition submodule is used for acquiring a license plate recognition map and an overlap area map of the target vehicle, wherein the license plate recognition map is the image in which the license plate number could be recognized for the last time before the complete track was formed when the vehicle warehousing event, vehicle ex-warehouse event, or vehicle illegal parking event of the target vehicle occurred, and the overlap area map is the video frame image shot the last time the target vehicle was located in the overlapping part of the monitoring ranges of two adjacent cameras before the complete track was formed;
a second determining submodule, configured to determine the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap area map as the event forensics map.
In one possible implementation, the obtaining sub-module includes:
a seventh determining unit, configured to determine the number of cameras that can monitor the target vehicle the last time before the complete trajectory is formed;
a first extraction unit, configured to extract the license plate recognition map and the overlap area map from the video frame image captured by the camera capable of monitoring the target vehicle for the last time when the number of cameras is one;
a second extraction unit, configured to extract the license plate recognition map and the overlap area map from a video frame image captured by any one of the cameras that can monitor the target vehicle at the last time when the number of the cameras is two.
In one possible implementation, the second determining sub-module includes:
and the synthesis unit is used for combining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap area map into one image, and for taking the combined image as the event forensics map.
In a third aspect, there is provided an apparatus for vehicle management, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of the first aspect described above.
In a fourth aspect, a computer-readable storage medium is provided, having instructions stored thereon, which when executed by a processor, implement the steps of any of the methods of the first aspect described above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method of any of the first aspects above.
The technical scheme provided by the embodiment of the invention at least has the following beneficial effects:
because the monitoring ranges of every two adjacent cameras of the plurality of cameras overlap, and the vehicle information sent by each camera includes at least the license plate information of a vehicle, when the vehicle information sent by at least one camera of the plurality of cameras includes the vehicle information of the target vehicle, the license plate number of the target vehicle can be determined by combining the vehicle information sent by the at least one camera. This avoids the problem that a single camera cannot recognize a license plate because the plate is blocked, seriously inclined, or appears small due to a long shooting distance. The complete track of the target vehicle on the straight-line section of the road is then determined according to the vehicle information of the target vehicle in the vehicle information sent by the at least one camera and the machine number of the at least one camera. At least one event occurrence process map of the target vehicle is acquired, an event forensics map of the target vehicle is determined according to the complete track and the at least one event occurrence process map, and the event forensics map of the target vehicle is stored in correspondence with the license plate information of the target vehicle. That is, the vehicle information of the target vehicle sent by all the cameras capable of monitoring the target vehicle is combined to form the complete track of the target vehicle on the straight-line section of the road, from which the event forensics map of the target vehicle is determined. Therefore, when the target vehicle is managed, the occurrence of a vehicle warehousing event, a vehicle ex-warehouse event, or a vehicle illegal parking event of the target vehicle can be confirmed from the event forensics map, the license plate information of the target vehicle is clearly known, and the effectiveness of managing the target vehicle is greatly improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by the present invention.
Fig. 2 is a schematic structural diagram of each of the plurality of cameras 101 provided by the present invention.
Fig. 3 is a schematic structural diagram of the server 102 provided in the present invention.
Fig. 4 is a flowchart of a method for vehicle management according to an embodiment of the present invention.
Fig. 5 is a flowchart of a method for vehicle management according to a second embodiment of the present invention.
Fig. 6 is a schematic diagram of establishing a calibration coordinate system according to a second embodiment of the present invention.
Fig. 7 is a schematic view of a camera mount according to a second embodiment of the present invention.
Fig. 8 is a schematic diagram of the imaging principle of the camera provided by the second embodiment of the present invention.
Fig. 9 is a schematic diagram of establishing a stitching coordinate system according to a second embodiment of the present invention.
Fig. 10 is a block diagram of a vehicle management apparatus according to a third embodiment of the present invention.
Fig. 11 is a schematic structural diagram of a server provided in the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present invention.
Before explaining the embodiments of the present invention in detail, the implementation environment of the embodiments of the present invention is described:
fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present invention, and referring to fig. 1, the implementation environment includes a plurality of cameras 101 and a server 102. The plurality of cameras 101 and the server 102 are connected through a network, and the plurality of cameras 101 include cameras 1, …, cameras i, … and cameras n.
Fig. 2 is a schematic configuration diagram of each of the plurality of cameras 101. Referring to fig. 2, the plurality of cameras 101 may be smart cameras, each of which includes a CCD (Charge-coupled Device) (not shown), a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), and a DDR (Double Data Rate) memory module.
The CCD is used for collecting video frame images and storing the collected video frame images to the DDR storage module.
The GPU comprises an image preprocessing module, a vehicle detection module and a license plate recognition module.
The image preprocessing module can acquire a video frame image from the DDR memory module, preprocess the video frame image, and store the preprocessed image into the DDR memory module.
The vehicle detection module can acquire the preprocessed image from the DDR storage module, detect the vehicle position and/or the vehicle type of the vehicle in the preprocessed image, and then store the vehicle position and/or the vehicle type into the DDR storage module. In addition, after the vehicle position is detected, the vehicle detection module may establish an ID-sc (Identity-smart camera identifier) for the vehicle at the vehicle position to distinguish from other vehicles.
The license plate recognition module can acquire the preprocessed image and the vehicle position from the DDR storage module, recognize the license plate of the vehicle at the vehicle position in the preprocessed image, obtain license plate information and store the license plate information into the DDR storage module.
The CPU includes a vehicle tracking module and an event analysis module.
The vehicle tracking module can acquire the preprocessed image, the vehicle position, and the license plate information from the DDR memory module, use them as the attribute information of each vehicle, form a trajectory for the vehicle, and store the trajectory in the DDR memory module. In addition, the vehicle tracking module can determine the ID-sc of a vehicle in the current video frame image as the ID-sc of the same vehicle in the next video frame image.
The event analysis module may obtain the trajectory of the vehicle from the DDR memory module and send the event occurrence process map of the vehicle to the server 102 according to the trajectory. Additionally, the event analysis module may also send the calibration coordinates of the vehicle in the calibration coordinate system to the server 102.
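Taken together, the camera-side modules form a fixed per-frame pipeline that communicates through the DDR memory module. The Python sketch below shows only that data flow; every function passed in is a hypothetical stand-in for the corresponding module, and the DDR memory module is modeled as a plain dictionary, so none of these names come from the patent itself.

```python
# Schematic sketch of the per-frame data flow between the camera modules of Fig. 2.
# All function arguments are hypothetical stand-ins; "ddr" models the DDR memory module.

def camera_cycle(ddr, capture, preprocess, detect_vehicles,
                 recognize_plates, track, analyze_events, send_to_server):
    ddr["frame"] = capture()                                            # CCD
    ddr["pre"] = preprocess(ddr["frame"])                               # GPU: image preprocessing
    ddr["positions"] = detect_vehicles(ddr["pre"])                      # GPU: vehicle detection
    ddr["plates"] = recognize_plates(ddr["pre"], ddr["positions"])      # GPU: plate recognition
    ddr["tracks"] = track(ddr["pre"], ddr["positions"], ddr["plates"])  # CPU: vehicle tracking
    for event in analyze_events(ddr["tracks"]):                         # CPU: event analysis
        send_to_server(event)  # event occurrence process maps and calibration coordinates
```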
Fig. 3 is a schematic diagram of the structure of the server 102. The server 102 provides a background service for the cameras 101, and may be a single server, a cluster composed of a plurality of servers, or a cloud computing center, which is not limited in the embodiments of the present invention. For illustration, a single server 102 is used in the embodiments of the present invention. The server 102 includes a CPU, a DDR memory module, and a user data storage area.
The CPU comprises a data receiving module, a track splicing module and a data management module.
The data receiving module is used for receiving the vehicle information and the video frame images sent by each camera 101 and storing them in the DDR memory module.
The track splicing module is used for forming the complete track of the target vehicle and storing it in the DDR memory module.
The data management module is used for determining the event forensics map of the target vehicle and storing it, in correspondence with the license plate information of the target vehicle, in the user data storage area.
An embodiment of the present invention provides a method for vehicle management. Referring to the flowchart in Fig. 4, the method is applied to a server and includes:
step 401: the method comprises the steps of receiving vehicle information and machine numbers of each camera in a plurality of cameras, wherein the vehicle information at least comprises license plate information, the cameras are arranged on straight line sections of a road along the trend of the road, overlapped parts exist in monitoring ranges of two adjacent cameras in the cameras, and each camera is provided with a unique machine number.
Step 402: when the vehicle information sent by at least one camera in the plurality of cameras comprises the vehicle information of the target vehicle, determining the complete track of the target vehicle on the straight line section of the road according to the vehicle information of the target vehicle in the vehicle information sent by the at least one camera and the machine number of the at least one camera, wherein the target vehicle is any vehicle currently subjected to vehicle management.
Step 403: acquiring at least one event occurrence process map of the target vehicle, wherein the at least one event occurrence process map is captured when any one of the plurality of cameras detects that a vehicle warehousing event, a vehicle ex-warehouse event, or a vehicle illegal parking event of the target vehicle occurs on the straight-line section of the road.
Step 404: determining an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map, and storing the event forensics map of the target vehicle in correspondence with the license plate information of the target vehicle.
In one possible implementation manner, the vehicle information of the target vehicle further includes calibration coordinates of the target vehicle in a calibration coordinate system;
the determining a complete track of the target vehicle on the straight-line section of the road according to the vehicle information of the target vehicle in the vehicle information sent by the at least one camera and the machine number of the at least one camera includes:
according to the machine number of at least one camera, converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into at least one corresponding splicing coordinate in a splicing coordinate system, wherein the splicing coordinate system is a coordinate system used for drawing the complete track;
determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the vehicle information sent by the at least one camera;
and when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connecting the previous splicing coordinate in the splicing coordinate system with the current splicing coordinate, returning to the step of receiving the vehicle information and the machine number of each camera in the plurality of cameras until the current splicing coordinate of the target vehicle in the splicing coordinate system is the same as the previous splicing coordinate, and connecting the current splicing coordinate of the target vehicle in the splicing coordinate system with the previous splicing coordinate to obtain the complete track of the target vehicle.
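The following minimal Python sketch illustrates this loop as a reading aid rather than an implementation. It assumes two hypothetical helpers that the patent does not name: receive_frame(), which returns the next batch of camera reports (vehicle information plus machine numbers), and fuse_current_coordinate(), which applies the current-splicing-coordinate rule described in the following paragraphs.

```python
# A minimal sketch of the track-stitching loop, under assumed helper functions.

def build_complete_track(target_plate, receive_frame, fuse_current_coordinate):
    track = []    # splicing coordinates connected so far
    prev = None
    while True:
        reports = receive_frame()                    # next batch of camera reports
        current = fuse_current_coordinate(target_plate, reports)
        if current is None:
            continue              # target vehicle not reported in this batch
        if prev is not None and current == prev:
            break                 # coordinate unchanged: the complete track is obtained
        track.append(current)     # "connect" the previous and current coordinates
        prev = current
    return track
```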
In one possible implementation, the machine numbers of the plurality of cameras are increased one by one starting from 0;
the step of converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into corresponding at least one splicing coordinate in a splicing coordinate system according to the machine number of the at least one camera includes:
taking the abscissa of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera as the abscissa of the target vehicle in the splicing coordinate system, and correspondingly adding the ordinate of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera to the machine number of the at least one camera to obtain the ordinate of the target vehicle in the splicing coordinate system.
In one possible implementation manner, determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the vehicle information sent by the at least one camera includes:
when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, determining the splicing coordinate corresponding to a first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system, wherein, compared with the other cameras of the at least one camera, the pixel area occupied by the target vehicle in the video frame image shot by the first camera is the largest;
when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are not completely the same, determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate;
when the determined matching degree is within a preset matching degree range, determining the distance between the two splicing coordinates;
and when the distance between the two splicing coordinates is smaller than the preset distance, determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system.
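A Python sketch of this rule for the two-coordinate case follows. The character-overlap measure of matching degree and the two numeric thresholds are illustrative assumptions; the patent only requires some preset matching degree range and some preset distance.

```python
# A sketch of the current-splicing-coordinate rule, assuming each camera report is
# a dict {"xy": (x, y), "plate": str, "pixel_area": int}. Thresholds are assumed.
import math

MATCH_DEGREE_RANGE = (0.6, 1.0)   # assumed preset matching degree range
PRESET_DISTANCE = 0.3             # assumed preset distance, in splicing units

def matching_degree(plate_a, plate_b):
    """Fraction of character positions on which the two plate numbers agree."""
    same = sum(1 for a, b in zip(plate_a, plate_b) if a == b)
    return same / max(len(plate_a), len(plate_b))

def current_splicing_coordinate(report_a, report_b):
    # The "first camera" is the one in whose frame the target vehicle occupies
    # the largest pixel area.
    first = max(report_a, report_b, key=lambda r: r["pixel_area"])
    if report_a["plate"] == report_b["plate"]:
        return first["xy"]       # identical plate characters: trust the first camera
    degree = matching_degree(report_a["plate"], report_b["plate"])
    if MATCH_DEGREE_RANGE[0] <= degree <= MATCH_DEGREE_RANGE[1]:
        if math.dist(report_a["xy"], report_b["xy"]) < PRESET_DISTANCE:
            return first["xy"]   # similar plates seen close together: one vehicle
    return None                  # the two reports do not describe one vehicle
```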
In a possible implementation manner, before the converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into the corresponding at least one splicing coordinate in the splicing coordinate system according to the machine number of the at least one camera, the method further includes:
receiving a reference image shot by each camera of the plurality of cameras and the machine number of that camera, wherein each reference image includes two calibration lines distributed up and down and at least two lane lines distributed left and right, and the reference images shot by every two adjacent cameras share one identical calibration line;
and establishing the splicing coordinate system according to the machine numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the received multiple reference images.
In one possible implementation, the establishing the splicing coordinate system according to the machine numbers of the plurality of cameras and the two calibration lines and the at least two lane lines included in each of the received plurality of reference images includes:
according to the machine numbers of the cameras, the same calibration lines in the reference images shot by two adjacent cameras in the cameras are overlapped, and the leftmost lane lines in the reference images are connected to obtain a vertical connecting line;
determining the directions of the machine numbers of the cameras from small to large as the direction of the vertical connecting line;
acquiring a bottommost calibration line in a reference image shot by a camera with the smallest machine number, and determining the horizontal right direction as the direction of the bottommost calibration line;
and taking the directed vertical connecting line as the longitudinal axis of the splicing coordinate system, taking the directed bottommost calibration line as the transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
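The effect of this construction can be sketched numerically: because adjacent reference images share one calibration line, each camera's view begins on the longitudinal axis exactly where the previous one ends. The snippet below assumes a simplified input in which each camera reports only the span between its two calibration lines in its own units; with every span normalized to 1, the offsets reduce to the machine-number offsets used in the coordinate conversion above.

```python
# A sketch of chaining reference images into one splicing coordinate system.
# The input structure is illustrative, not taken from the patent.

def splicing_offsets(reference_images):
    """reference_images: list ordered by machine number (0, 1, 2, ...);
    each item is {"calibration": (y_bottom, y_top)} in that camera's own units."""
    offsets = [0.0]   # camera 0's bottommost calibration line is the transverse axis
    for image in reference_images[:-1]:
        y_bottom, y_top = image["calibration"]
        offsets.append(offsets[-1] + (y_top - y_bottom))  # next view starts where this one ends
    return offsets

print(splicing_offsets([{"calibration": (0.0, 1.0)}] * 3))  # -> [0.0, 1.0, 2.0]
```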
In one possible implementation, the determining an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map includes:
selecting a first event occurrence process map from the at least one event occurrence process map, wherein the first event occurrence process map is the event occurrence process map in which the pixel area occupied by the target vehicle is the largest;
intercepting a license plate expansion area from the first event occurrence process map, and determining the intercepted license plate expansion area as a vehicle close-up map of the target vehicle, wherein the license plate expansion area is an area that is expanded from the license plate area until it includes the head or the tail of the target vehicle;
acquiring a license plate recognition map and an overlap area map of the target vehicle, wherein the license plate recognition map is the image in which the license plate number could be recognized for the last time before the complete track was formed when the vehicle warehousing event, vehicle ex-warehouse event, or vehicle illegal parking event of the target vehicle occurred, and the overlap area map is the video frame image shot the last time the target vehicle was located in the overlapping part of the monitoring ranges of two adjacent cameras before the complete track was formed;
and determining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map and the overlapping area map as the event forensics map.
In one possible implementation manner, the acquiring the license plate recognition map and the overlap area map of the target vehicle includes:
determining the number of cameras that can monitor the target vehicle the last time before the complete trajectory is formed;
when the number of the cameras is one, extracting the license plate recognition image and the overlapping area image from the video frame image shot by the camera which can monitor the target vehicle at the last time;
when the number of the cameras is two, the license plate recognition image and the overlapping area image are extracted from the video frame image shot by any one of the cameras capable of monitoring the target vehicle at the last time.
In one possible implementation, the determining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap region map as the event forensics map includes:
and combining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap area map into one image, and taking the combined image as the event forensics map.
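As a rough illustration of this combining step, the sketch below stacks the component images into one horizontal strip with Pillow. The layout and tile height are our assumptions; the patent only requires that a single combined image be stored as the event forensics map.

```python
# A sketch of composing the event forensics map from its component images.
from PIL import Image

def compose_forensics_map(images, tile_height=480):
    # Scale every component image to a common height, preserving aspect ratio.
    tiles = [img.resize((int(img.width * tile_height / img.height), tile_height))
             for img in images]
    canvas = Image.new("RGB", (sum(t.width for t in tiles), tile_height))
    x = 0
    for tile in tiles:
        canvas.paste(tile, (x, 0))
        x += tile.width
    return canvas
```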
In the embodiments of the present invention, because the monitoring ranges of every two adjacent cameras of the plurality of cameras overlap, and the vehicle information sent by each camera includes at least the license plate information of a vehicle, when the vehicle information sent by at least one camera of the plurality of cameras includes the vehicle information of the target vehicle, the license plate number of the target vehicle can be determined by combining the vehicle information sent by the at least one camera. This avoids the problem that a single camera cannot recognize a license plate because the plate is blocked, seriously inclined, or appears small due to a long shooting distance. The complete track of the target vehicle on the straight-line section of the road is determined according to the vehicle information of the target vehicle in the vehicle information sent by the at least one camera and the machine number of the at least one camera. At least one event occurrence process map of the target vehicle is acquired, an event forensics map of the target vehicle is determined according to the complete track and the at least one event occurrence process map, and the event forensics map of the target vehicle is stored in correspondence with the license plate information of the target vehicle. That is, the vehicle information of the target vehicle sent by all the cameras capable of monitoring the target vehicle is combined to form the complete track of the target vehicle on the straight-line section of the road, from which the event forensics map of the target vehicle is determined. Therefore, when the target vehicle is managed, the occurrence of a vehicle warehousing event, a vehicle ex-warehouse event, or a vehicle illegal parking event of the target vehicle can be confirmed from the event forensics map, the license plate information of the target vehicle is clearly known, and the effectiveness of managing the target vehicle is greatly improved.
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present invention, which is not described in detail herein.
A second embodiment of the present invention provides a method for vehicle management that expands on the embodiment shown in Fig. 4. Referring to the flowchart in Fig. 5, the method is applied to a server and includes:
step 501: the server receives vehicle information and the machine number of the server, wherein the vehicle information and the machine number of the server are sent by each camera of the plurality of cameras, and the vehicle information at least comprises license plate information.
The plurality of cameras are arranged along a straight-line section of the road following the direction of the road, and the monitoring ranges of every two adjacent cameras overlap. In addition, each camera has a unique machine number. The unique machine number may be sent to the server in advance by each camera through a dedicated channel between the camera and the server, or may be carried in the video frame images shot by each camera so that the machine number is sent to the server together with the video frame images; this is not limited in the embodiments of the present invention.
In addition, each camera of the plurality of cameras can acquire the license plate information of a vehicle while monitoring the vehicles within its monitoring range, and then sends the license plate information to the server as part of the vehicle information. The process by which a camera acquires the license plate information is detailed in the following steps:
Step 5011: the camera collects video frame images and preprocesses them to obtain preprocessed images.
The camera can preprocess a collected video frame image by, for example, format conversion and down-sampling. The format of the video frame image used in subsequent processing may differ from the format of the original video frame image just collected, so the camera can convert the collected original video frame image into the format used subsequently. Down-sampling reduces the number of sampling points of the video frame image in order to improve the efficiency with which the camera processes it.
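A minimal OpenCV sketch of these two steps is shown below. The target color space and the down-sampling factor are illustrative assumptions; the patent fixes neither.

```python
# A sketch of the format-conversion and down-sampling preprocessing steps.
import cv2

def preprocess(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # format conversion (assumed target)
    return cv2.resize(gray, None, fx=0.5, fy=0.5,        # fewer sampling points, so later
                      interpolation=cv2.INTER_AREA)      # processing runs faster
```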
It should be noted that, preprocessing the video frame image is an optional step, that is, after the video frame image is captured by the camera, the vehicle position can be directly detected from the captured video frame image without preprocessing.
Step 5012: the camera performs vehicle detection on the preprocessed image to determine the vehicle position of each vehicle in the preprocessed image.
The camera can detect the position of the vehicle by adopting a deep learning method.
In one possible implementation, the camera may also detect the vehicle type of the vehicle from the preprocessed image by using a deep learning method.
Step 5013: the camera performs license plate recognition on the preprocessed image according to the vehicle position to obtain initial license plate information.
The initial license plate information includes a license plate position, a license plate number, and a confidence. Optionally, the initial license plate information may further include a license plate type. The camera can perform license plate recognition on the preprocessed image by a deep learning method to obtain the license plate position, the license plate number, and the confidence, and can determine the license plate type by a color-based license plate recognition algorithm.
It should be noted that the confidence measures how trustworthy the license plate number in the initial license plate information obtained by recognition is; the final trusted license plate number is then determined according to the confidence. Therefore, although the initial license plate information obtained by the camera includes the confidence, the license plate information sent to the server need not include it. That is, the license plate information sent by the camera to the server includes the license plate position and the license plate number, and optionally the license plate type.
When determining the final trusted license plate number according to the confidence, the camera can preset a confidence threshold. When the confidence in the initial license plate information is greater than the confidence threshold, the camera determines the corresponding license plate number as the final trusted license plate number and establishes a correspondence between this license plate number and the ID-sc of the vehicle. When the confidence obtained by license plate recognition is smaller than or equal to the confidence threshold, the camera discards the corresponding license plate number, determines the ID-sc of the vehicle concerned, and then determines the final trusted license plate number of that vehicle from its ID-sc and the stored correspondence between trusted license plate numbers and vehicle ID-scs. The confidence may be represented by a number between 0 and 100: for example, a confidence between 0 and 50 may indicate that the license plate number is not trusted, while a confidence between 50 and 100 may indicate that it is trusted. Of course, the confidence may also be expressed as a percentage, which is not limited in the embodiments of the present invention.
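The threshold logic can be sketched as follows; the threshold value of 50 simply follows the example range above, and trusted_plates models the correspondence the camera keeps between a vehicle's ID-sc and its final trusted license plate number.

```python
# A sketch of the confidence-threshold logic; the threshold value is illustrative.
CONFIDENCE_THRESHOLD = 50
trusted_plates = {}   # ID-sc -> final trusted license plate number

def resolve_plate(id_sc, plate_number, confidence):
    if confidence > CONFIDENCE_THRESHOLD:
        trusted_plates[id_sc] = plate_number   # record the newly trusted plate number
        return plate_number
    # Low confidence: discard this reading and fall back to the plate number
    # previously trusted for the same tracked vehicle (ID-sc), if any.
    return trusted_plates.get(id_sc)
```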
It should be noted that, when determining the license plate type, the camera may use a color-based license plate recognition algorithm to recognize the license plate background color and the color of the license plate characters, and match the recognized colors against the license plate type rules; the matching result is the license plate type. The license plate type rules comprise the correspondence between the combination of background color and character color and the license plate type. For example, if the recognized background color and character color are blue and white respectively, i.e., a blue plate with white characters, the corresponding license plate type is the license plate of an ordinary small vehicle; if they are yellow and black respectively, i.e., a yellow plate with black characters, the corresponding license plate type is the license plate of a large vehicle.
In addition, in one possible implementation manner, the camera detects the vehicle type while detecting the vehicle position in step 5012; in this case, since the vehicle type is already determined, the initial license plate information obtained during license plate recognition may include only the license plate position, the license plate number and the credibility. In another possible implementation manner, the camera may detect only the vehicle position in step 5012 and not the vehicle type; in this case, since the vehicle type is not yet determined, the initial license plate information may include the license plate position, the license plate number, the credibility and the license plate type, and the vehicle type is determined from the license plate type. The embodiment of the present invention is not limited thereto. When the vehicle type is determined from the license plate type, the vehicle type corresponding to the license plate type can be determined according to the correspondence between license plate types and vehicle types; for example, the license plate of an ordinary small vehicle corresponds to an ordinary small vehicle, and the license plate of a large vehicle corresponds to a large vehicle.
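The two correspondences just described can be sketched as simple lookup tables; the two rules shown are the examples given in the text, and any further entries would be assumptions.

```python
# (background color, character color) -> license plate type
PLATE_TYPE_RULES = {
    ("blue", "white"): "ordinary small vehicle plate",
    ("yellow", "black"): "large vehicle plate",
}

# license plate type -> vehicle type
VEHICLE_TYPE_BY_PLATE_TYPE = {
    "ordinary small vehicle plate": "ordinary small vehicle",
    "large vehicle plate": "large vehicle",
}

def classify_plate(background_color, character_color):
    plate_type = PLATE_TYPE_RULES.get((background_color, character_color))
    vehicle_type = VEHICLE_TYPE_BY_PLATE_TYPE.get(plate_type)
    return plate_type, vehicle_type

print(classify_plate("blue", "white"))
# ('ordinary small vehicle plate', 'ordinary small vehicle')
```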
Step 502: when the vehicle information sent by at least one camera in the plurality of cameras comprises the vehicle information of the target vehicle, the server converts the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into at least one corresponding splicing coordinate in a splicing coordinate system according to the machine number of the at least one camera.
The vehicle information of the target vehicle further comprises a calibration coordinate of the target vehicle in a calibration coordinate system, the splicing coordinate system is a coordinate system used for drawing the complete track, and the target vehicle is any vehicle currently subjected to vehicle management.
When vehicle management is performed, there may be vehicle information of a target vehicle or there may be no vehicle information of the target vehicle among the vehicle information transmitted from the plurality of cameras to the server. Therefore, the server may detect the vehicle information of the target vehicle from the vehicle information transmitted from the plurality of cameras and determine the machine number of at least one camera that transmits the vehicle information of the target vehicle.
In one possible implementation manner, when the server converts the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into the corresponding at least one stitching coordinate in the stitching coordinate system, the abscissa of each calibration coordinate may be taken directly as the abscissa of the target vehicle in the stitching coordinate system, while the ordinate of each calibration coordinate is added to the machine number of the camera that sent it to obtain the ordinate of the target vehicle in the stitching coordinate system. The machine numbers of the plurality of cameras are increased one by one starting from 0.
For example, if the target vehicle calibration coordinates transmitted by camera No. 0 are (0.5,0.6), the target vehicle calibration coordinates are converted to the stitching coordinates (0.5,0.6) in the stitching coordinate system. And if the calibration coordinate of the target vehicle sent by the camera No. 1 is (0.5,0.6), the splicing coordinate converted into the splicing coordinate system is (0.5, 1.6). And if the calibration coordinate of the target vehicle sent by the camera No. 2 is (0.5,0.6), converting to the splicing coordinate system with splicing coordinates of (0.5,2.6), and so on.
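The conversion is simple enough to state directly in code; the following sketch reproduces the three examples above under the stated assumption that machine numbers count up from 0.

```python
def to_stitching_coordinate(calibration_coordinate, machine_number):
    """Keep the abscissa; add the camera's machine number to the ordinate."""
    x, y = calibration_coordinate
    return (x, y + machine_number)

assert to_stitching_coordinate((0.5, 0.6), 0) == (0.5, 0.6)   # camera No. 0
assert to_stitching_coordinate((0.5, 0.6), 1) == (0.5, 1.6)   # camera No. 1
assert to_stitching_coordinate((0.5, 0.6), 2) == (0.5, 2.6)   # camera No. 2
```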
It should be noted that, during the process of monitoring the target vehicle, at least one camera may also obtain the calibration coordinates of the target vehicle, and then send the calibration coordinates to the server as vehicle information.
The at least one camera can convert the vehicle position in the preprocessed image from the image coordinate system to the calibration coordinate system to obtain the calibration coordinates of the target vehicle in the calibration coordinate system. Optionally, the at least one camera may first establish an image coordinate system and a calibration coordinate system in the preprocessed image, determine an image coordinate of the target vehicle in the image coordinate system according to the vehicle position of the target vehicle in the preprocessed image, and then convert the image coordinate of the target vehicle into the calibration coordinate system, so as to obtain a calibration coordinate of the target vehicle in the calibration coordinate system.
When establishing a calibration coordinate system, the camera can select any frame of preprocessed image as a reference image, where the reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right. The camera then takes the leftmost lane line, directed vertically upward, as the vertical axis of the calibration coordinate system, and the lowest calibration line, directed horizontally to the right, as the horizontal axis, and establishes the calibration coordinate system according to the distance between the two calibration lines and the distance between two adjacent lane lines.
As shown in fig. 6, fig. 6 is a schematic diagram of establishing a calibration coordinate system, where two calibration lines and three lane lines are taken as an example, the two calibration lines are respectively calibration line 1 and calibration line 2, the three lane lines are respectively lane line 1, lane line 2 and lane line 3, a width between lane line 1 and lane line 2 is a width of a first lane, and a width between lane line 2 and lane line 3 is a width of a second lane. In the calibration coordinate system, the coordinates of the intersection between the horizontal axis of the calibration coordinate system and the vertical axis of the calibration coordinate system are (0,0), the coordinates of the intersection between the lane line 3 and the horizontal axis of the calibration coordinate system are (1,0), the coordinates of the intersection between the calibration line 2 and the vertical axis of the calibration coordinate system are (0,1), and the coordinates of the intersection between the lane line 3 and the calibration line 2 are (1, 1). The dashed line is lane line 2 and the bounding box is used to represent a plurality of different vehicles including the target vehicle.
After the calibration coordinate system is established, the camera may represent the target vehicle by a lower boundary center point of a boundary frame of the target vehicle, and determine the calibration coordinates of the target vehicle through the following steps:
1. Determine the abscissa of the target vehicle in the calibration coordinate system.
The camera can determine the abscissa of the target vehicle according to the total number of lanes, the distance between the target vehicle and the left lane line of its lane, and the width of that lane, through the following formula one:
The formula one is as follows:
$$x = \frac{(i - 1) + \frac{a}{b}}{n}$$
where x is the abscissa of the target vehicle, n is the total number of lanes, i is the index, counted from the left, of the lane where the target vehicle is located (an integer not greater than n), a is the distance between the target vehicle and the left lane line of its lane, and b is the width of that lane.
2. Determine the ordinate of the target vehicle in the calibration coordinate system.
Fig. 7 is a schematic view of the camera mount. Fig. 7 corresponds to fig. 6; that is, fig. 7 shows the scene of fig. 6 viewed from its right side, and illustrates the determination of the ordinate for a target vehicle located between lane line 1 and lane line 2. In fig. 7, point O is the position of the camera, point A is the point on the ground where the camera upright stands, and segment OA is the length of the camera upright. Point C corresponds to the intersection of lane line 2 and calibration line 1 in fig. 6, point G corresponds to the intersection of lane line 2 and calibration line 2 in fig. 6, and point D corresponds to the intersection of lane line 2 and the horizontal line where the target vehicle is located in fig. 6. L1 is the distance AC, L2 is the distance AG, L is the distance AD, and d is the distance CG.
According to fig. 7, the camera can determine the ordinate of the target vehicle in the calibration coordinate system according to the following three steps:
(1): According to the vertical distance between calibration line 1 and the camera upright and the distance between calibration line 1 and calibration line 2, a first parameter is determined through the following formula two:
The formula two is as follows:
$$k = \frac{L_1}{d}$$
where k is the first parameter, L1 is the vertical distance between the calibration line 1 and the camera upright, and d is the distance between the calibration line 1 and the calibration line 2.
(2): determining a second parameter according to an imaging width of a distance between the first intersection point and the second intersection point, an imaging width of a distance between the third intersection point and the fourth intersection point, and an imaging width of a distance between the fifth intersection point and the sixth intersection point by the following formula three:
The formula three is as follows:
$$m = \frac{q^2\left(p^2 - r^2\right)}{r^2\left(p^2 - q^2\right)}$$
wherein m is a second parameter, p is an imaging width of a distance between the first intersection point and the second intersection point, q is an imaging width of a distance between the third intersection point and the fourth intersection point, and r is an imaging width of a distance between the fifth intersection point and the sixth intersection point. The first intersection point is the intersection point of the calibration line 1 and the lane line 1, and the second intersection point is the intersection point of the calibration line 1 and the lane line 2. The third intersection point is the intersection point of the calibration line 2 and the lane line 1, and the fourth intersection point is the intersection point of the calibration line 2 and the lane line 2. The fifth intersection point is the intersection point of the horizontal line of the target vehicle and the lane line 1, and the sixth intersection point is the intersection point of the horizontal line of the target vehicle and the lane line 2.
It should be noted that the distance between the first and second intersection points, the distance between the third and fourth intersection points, and the distance between the fifth and sixth intersection points are the same when actually measured; however, since calibration line 1, calibration line 2 and the horizontal line where the target vehicle is located are at different distances from the camera, the imaging widths of these three distances in the camera are different.
It should further be noted that, before determining the second parameter in step (2), the camera may relate p, q and r to the scene geometry through the following formulas four to nine:
The formula four is as follows:
$$OC \cdot p = OG \cdot q = OD \cdot r$$
Wherein OC is the distance between the position of the camera and the point C, OG is the distance between the position of the camera and the point G, and OD is the distance between the position of the camera and the point D.
The formula five is as follows:
$$OC = \sqrt{OA^2 + L_1^2}$$
The formula six is as follows:
$$OG = \sqrt{OA^2 + L_2^2}$$
The formula seven is as follows:
$$OD = \sqrt{OA^2 + L^2}$$
The formula eight is as follows:
$$L_2 = L_1 + d = kd + d$$
The formula nine is as follows:
$$L = L_1 + CD = kd + yd$$
where CD is the distance between point C and point D, and y is the ordinate of the target vehicle in the calibration coordinate system.
It should be noted that formula four can be obtained according to the imaging principle of the camera. As shown in fig. 8, fig. 8 is a schematic view of the imaging principle of the camera. In fig. 8, W is the actual width of the object, f is the focal length of the lens of the camera, S is the distance between the center of the lens of the camera and the object, and z is the imaging width of the object in the camera, and the relation $W \cdot f = S \cdot z$ is satisfied.
(3): according to the first parameter and the second parameter, determining the ordinate of the target vehicle in the calibration coordinate system by the following formula ten:
The formula ten is as follows:
$$y = \sqrt{k^2 + (2k + 1)\,m} - k$$
and y is the ordinate of the target vehicle in the calibration coordinate system.
This completes the determination of the calibration coordinates of the target vehicle.
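Putting formulas one, two, three and ten together gives the sketch below; since the printed equations were reconstructed from the surrounding definitions, the exact algebra should be treated as an assumption, and all the numbers in the example are invented measurements.

```python
import math

def abscissa(i, n, a, b):
    """Formula one: lane index i counted from the left (1..n), n lanes,
    offset a from the left lane line of a lane of width b."""
    return ((i - 1) + a / b) / n

def ordinate(L1, d, p, q, r):
    """Formulas two, three and ten: k from the calibration geometry,
    m from the three imaging widths, then y from k and m."""
    k = L1 / d                                                  # formula two
    m = (q * q * (p * p - r * r)) / (r * r * (p * p - q * q))  # formula three
    return math.sqrt(k * k + (2 * k + 1) * m) - k              # formula ten

# Vehicle 1.2 m into the first of two 3.6 m lanes; calibration line 1 is
# 20 m from the upright, the calibration lines are 10 m apart, and the
# lane's imaging width shrinks from 64 px at line 1 to 40 px at the vehicle.
x = abscissa(i=1, n=2, a=1.2, b=3.6)
y = ordinate(L1=20.0, d=10.0, p=64.0, q=48.0, r=40.0)
print(round(x, 3), round(y, 3))   # calibration coordinates of the vehicle
```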
In addition, the server can establish a splicing coordinate system before converting the calibration coordinates of the target vehicle sent by the at least one camera into the corresponding at least one splicing coordinate in the splicing coordinate system. Optionally, the server may receive, from each of the plurality of cameras, one reference image and that camera's machine number, and establish the splicing coordinate system according to the machine numbers of the cameras and the two calibration lines and at least two lane lines included in each of the received reference images.
Each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and the reference images shot by two adjacent cameras have the same calibration line. The process of the server establishing the mosaic coordinate system is described by three steps as follows:
1. the server superposes the same calibration lines in the reference images shot by two adjacent cameras in the cameras according to the machine numbers of the cameras, connects the leftmost lane lines in the reference images to obtain a vertical connecting line, and determines the direction of the machine numbers of the cameras from small to large as the direction of the vertical connecting line.
Because the reference images shot by the two adjacent cameras have the same calibration line, the area corresponding to the same calibration line can be shot by the two cameras at the same time, that is, the area corresponding to the same calibration line is the overlapping part of the monitoring ranges of the two adjacent cameras. Therefore, in order to avoid the problem that the mosaic coordinate system is not accurately established due to the overlapping part when the mosaic coordinate system is established, the embodiment of the invention enables the same calibration line in the reference images shot by two adjacent cameras in the plurality of cameras to be overlapped.
2. The server acquires a lowermost calibration line in a reference image shot by a camera with the smallest machine number, and determines the horizontal right direction as the direction of the lowermost calibration line.
3. The server takes the directional vertical connecting line as a longitudinal axis of the splicing coordinate system, takes the directional lowest calibration line as a transverse axis of the splicing coordinate system, and establishes the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
It should be noted that, after determining the horizontal axis and the vertical axis of the stitching coordinate system, the server may also keep other lane lines in the reference image that are not taken as the vertical axis of the stitching coordinate system, and other calibration lines in the reference image that are not taken as the horizontal axis of the stitching coordinate system.
As shown in fig. 9, fig. 9 is a schematic diagram of establishing a stitching coordinate system, established here from the reference images taken by two adjacent cameras, camera No. 0 and camera No. 1 (neither the reference images nor the cameras are shown). Calibration line 3 is a calibration line in the reference image shot by camera No. 0, and the horizontal rightward direction is the direction of calibration line 3; calibration line 4 is the calibration line common to the reference images shot by camera No. 0 and camera No. 1; and calibration line 5 is a calibration line in the reference image shot by camera No. 1. Lane line 4 is the vertical connecting line obtained by connecting the leftmost lane lines of the two reference images, and the direction from camera No. 0 to camera No. 1 is the direction of this vertical connecting line. Lane line 5 connects the middle lane lines of the two reference images, and lane line 6 connects the rightmost lane lines of the two reference images.
In fig. 9, the directional lane line 4 is the vertical axis of the stitching coordinate system, and the directional calibration line 3 is the horizontal axis of the stitching coordinate system. The coordinates of the intersection of the calibration line 4 and the longitudinal axis of the stitching coordinate system are (0,1), and the coordinates of the intersection of the calibration line 5 and the longitudinal axis of the stitching coordinate system are (0, 2). The coordinates of the intersection of the lane line 6 and the horizontal axis of the stitching coordinate system are (1,0), the coordinates of the intersection of the lane line 6 and the calibration line 4 are (1,1), and the coordinates of the intersection of the lane line 6 and the calibration line 5 are (1, 2). The dashed line is the lane line 5 and the bounding box is used to represent a number of different vehicles.
Step 503: the server determines the current stitching coordinate of the target vehicle in the stitching coordinate system according to the license plate information of the target vehicle in the vehicle information sent by the at least one camera and the at least one stitching coordinate. When the current stitching coordinate of the target vehicle in the stitching coordinate system is different from the previous stitching coordinate, the server connects the previous stitching coordinate and the current stitching coordinate in the stitching coordinate system and returns to step 501. This repeats until the current stitching coordinate of the target vehicle in the stitching coordinate system is the same as the previous stitching coordinate, at which point the server connects the current stitching coordinate and the previous stitching coordinate in the stitching coordinate system to obtain the complete track of the target vehicle.
Wherein the complete trajectory is a complete trajectory of the target vehicle on a straight segment portion of the road.
It should be noted that the vehicle may be located in an overlapping portion of the monitoring ranges of two adjacent cameras, that is, two adjacent cameras can simultaneously capture the vehicle located in the overlapping portion, and send the calibration coordinates of the vehicle located in the overlapping portion to the server, so as to convert the calibration coordinates into the stitching coordinates.
When the number of the at least one stitching coordinate is two, the two stitching coordinates may be the coordinates of two different vehicles, or may both be coordinates of the same vehicle, namely the target vehicle. At this time, the server can judge according to the license plate information and the distance between the two stitching coordinates. When the server judges that the two stitching coordinates belong to two different vehicles, it stores both stitching coordinates. When the server judges that the two stitching coordinates belong to the target vehicle, it determines one of them as the current stitching coordinate of the target vehicle in the stitching coordinate system. This determination can be realized through the following steps (a code sketch follows them):
1. When the number of the at least one stitching coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the stitching coordinates are completely the same, the server determines the stitching coordinate corresponding to the first camera as the current stitching coordinate of the target vehicle in the stitching coordinate system, where the first camera is the camera of the at least one camera in whose video frame image the target vehicle occupies the largest pixel area.
When the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the two stitching coordinates are completely the same, it can be concluded that both stitching coordinates belong to the target vehicle. Therefore, the server can select one of them as the current stitching coordinate of the target vehicle in the stitching coordinate system.

It should be noted that the larger the pixel area occupied by the target vehicle in the video frame image, the larger and clearer the license plate number of the target vehicle appears. Therefore, the server can select the stitching coordinate corresponding to the first camera as the current stitching coordinate of the target vehicle in the stitching coordinate system.

2. When the number of the at least one stitching coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the two stitching coordinates are not completely identical, the server determines the matching degree between these license plate numbers.
A camera may fail to capture the license plate, or the captured license plate may be severely tilted or too small. In these cases, the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the two stitching coordinates may not be completely identical; that is, the server cannot determine the current stitching coordinate of the target vehicle in the stitching coordinate system from the license plate characters alone. Therefore, the server can determine the matching degree between the license plate numbers in the license plate information of the target vehicle, and further determine the current stitching coordinate of the target vehicle in the stitching coordinate system according to this matching degree.
3. When the determined matching degree is within the preset matching degree range, the server determines the distance between the two stitching coordinates.

The server may preset a matching degree range; when the determined matching degree falls within this range, the server may further determine the distance between the two stitching coordinates in order to decide the current stitching coordinate of the target vehicle in the stitching coordinate system.

It should be noted that when the determined matching degree is outside the preset matching degree range, the matching degree between the license plate numbers in the license plate information of the target vehicle corresponding to the two stitching coordinates is small; in this case it can be determined that the two stitching coordinates belong to two different vehicles, and both stitching coordinates are stored.
4. When the distance between the two stitching coordinates is smaller than the preset distance, the server determines the stitching coordinate corresponding to the first camera as the current stitching coordinate of the target vehicle in the stitching coordinate system.

When the distance between the two stitching coordinates is smaller than the preset distance, it can be concluded that both stitching coordinates belong to the target vehicle. Therefore, the server may determine the stitching coordinate corresponding to the first camera as the current stitching coordinate of the target vehicle in the stitching coordinate system.

It should be noted that, when the distance between the two stitching coordinates is greater than or equal to the preset distance, it can be concluded that the two stitching coordinates belong to two different vehicles. In this case, the server may store both stitching coordinates.
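The four steps can be summarized in one small routine; the string-similarity measure, the matching degree range and the distance threshold below are assumptions, since the embodiment leaves all three presets open.

```python
import math
from difflib import SequenceMatcher

MATCH_RANGE = (0.6, 1.0)   # assumed preset matching degree range
MAX_DISTANCE = 0.2         # assumed preset distance, in stitching units

def resolve_current_coordinate(coord1, coord2, plate1, plate2, first_camera_coord):
    """Return the target's current stitching coordinate, or None when the two
    candidates belong to two different vehicles and should both be stored."""
    if plate1 == plate2:                                    # step 1
        return first_camera_coord
    degree = SequenceMatcher(None, plate1, plate2).ratio()  # step 2
    if not (MATCH_RANGE[0] <= degree <= MATCH_RANGE[1]):
        return None                                         # different vehicles
    if math.dist(coord1, coord2) < MAX_DISTANCE:            # steps 3 and 4
        return first_camera_coord
    return None                                             # different vehicles

print(resolve_current_coordinate(
    (0.5, 1.58), (0.52, 1.6), "A12345", "A12845", (0.5, 1.58)))
```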
In addition, the erection conditions of two adjacent cameras influence whether the two stitching coordinates of the target vehicle in the overlapping portion are the same. If the erection conditions of the two adjacent cameras are completely consistent, the two stitching coordinates of the target vehicle in the overlapping portion are the same. If they are not completely consistent, the two stitching coordinates are not exactly the same; in this case the server can determine the current stitching coordinate of the target vehicle in the stitching coordinate system by the method of this step, i.e., according to the license plate information of the target vehicle in the vehicle information sent by the at least one camera and the at least one stitching coordinate, thereby merging the two not-exactly-identical stitching coordinates into the current stitching coordinate of the target vehicle in the stitching coordinate system.
It should be noted that, when the current stitching coordinate of the target vehicle in the stitching coordinate system is different from the previous stitching coordinate, the target vehicle is still moving; therefore, after connecting the previous stitching coordinate with the current stitching coordinate, the process returns to step 501 to continue receiving the vehicle information and machine number sent by each camera. When the current stitching coordinate of the target vehicle in the stitching coordinate system is the same as the previous stitching coordinate, the target vehicle has stopped moving, and the current stitching coordinate and the previous stitching coordinate of the target vehicle in the stitching coordinate system are connected to obtain the complete track of the target vehicle.
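The stop condition of step 503 amounts to the small loop below; the coordinate stream is a made-up stand-in for the per-round results of step 502.

```python
def build_complete_track(coordinate_stream):
    """Append the current stitching coordinate while it keeps changing;
    stop once two consecutive coordinates coincide (vehicle stopped)."""
    track = []
    for current in coordinate_stream:
        if track and current == track[-1]:
            break                  # same as previous: the track is complete
        track.append(current)      # different: connect previous -> current
    return track

stream = [(0.5, 0.6), (0.5, 1.1), (0.5, 1.6), (0.5, 1.6)]
print(build_complete_track(stream))
# [(0.5, 0.6), (0.5, 1.1), (0.5, 1.6)]
```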
Step 504: the server acquires at least one event occurrence process diagram of the target vehicle.
The at least one event occurrence process graph is obtained by shooting when any one of the plurality of cameras detects that the target vehicle has a vehicle entering event, a vehicle leaving event or a vehicle illegal parking event in a straight-line section of the road.
For any camera, when the target vehicle is within the monitoring range of that camera, the camera can track the position of the target vehicle to form the track of the target vehicle, and analyze from this track whether the target vehicle has a vehicle warehousing event, a vehicle ex-warehouse event or a vehicle illegal parking event in the straight-line section of the road. When detecting that the target vehicle has a vehicle warehousing event, a vehicle ex-warehouse event or a vehicle illegal parking event, the camera sends at least one event occurrence process diagram of the target vehicle to the server.
It should be noted that the camera may compare the track of the target vehicle with the event rules and the preset rule line to determine whether the target vehicle has a vehicle warehousing event, a vehicle ex-warehouse event or a vehicle illegal parking event, and, upon determining that such an event has occurred, capture at least one event occurrence process diagram and send it to the server. When the target vehicle has a vehicle warehousing event or a vehicle ex-warehouse event, the preset rule line may be a parking space boundary line; when the target vehicle has a vehicle illegal parking event, the preset rule line may be a boundary line of the illegal parking area. The following description takes 3 event occurrence process diagrams captured by a camera as an example (a code sketch of the third case follows the list):
1. When the track of the target vehicle is detected to conform to the event rule of the vehicle warehousing event, it is determined that the target vehicle tends to enter the parking space, and a first event occurrence process diagram is captured. The track of the target vehicle continues to be acquired; when an intersection between the track and the parking space boundary line is detected, it is determined that the target vehicle is entering the parking space, and a second event occurrence process diagram is captured. The track continues to be acquired; when the track is detected to keep extending beyond its intersection with the parking space boundary line and then remain unchanged for a first preset time period, it is determined that the target vehicle has completely entered the parking space, and a third event occurrence process diagram is captured.

2. When the track of the target vehicle is detected to conform to the event rule of the vehicle ex-warehouse event, it is determined that the target vehicle tends to leave the parking space, and a first event occurrence process diagram is captured. The track continues to be acquired; when an intersection between the track and the parking space boundary line is detected, it is determined that the target vehicle is leaving the parking space, and a second event occurrence process diagram is captured. The track continues to be acquired; when the track is detected to keep extending beyond its intersection with the parking space boundary line until it reaches the image boundary, it is determined that the vehicle has completely left the parking space, and a third event occurrence process diagram is captured.

3. When it is detected that the track of the target vehicle intersects one boundary line of the illegal parking area but, within a second preset time period after the current time, does not intersect the opposite boundary line of the illegal parking area, it is determined that the target vehicle tends to park illegally, and a first event occurrence process diagram is captured. The track continues to be acquired; when the track is detected to remain unchanged after a third preset time period, it is determined that the vehicle is in violation, and a second event occurrence process diagram is captured. The track continues to be acquired; when the track is detected to remain unchanged after a further fourth preset time period, it is determined that the target vehicle is still in violation, and a third event occurrence process diagram is captured.
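As a sketch of the third case, the routine below fires the three captures for an illegal parking event; the sampling format, the unchanged-track test and the concrete time values are all assumptions, and `capture` merely stands in for grabbing an event occurrence process diagram.

```python
def capture(stage):
    print(f"event occurrence process diagram: {stage}")

def monitor_illegal_parking(samples, t3, t4):
    """samples: (timestamp, position) pairs for a vehicle that has already
    crossed into the illegal parking area without crossing back out."""
    capture("tendency to park illegally")             # first diagram
    start_time, start_pos = samples[0]
    second_fired = False
    for timestamp, position in samples[1:]:
        if position != start_pos:
            return                                    # track changed: vehicle moved
        if not second_fired and timestamp - start_time >= t3:
            capture("vehicle in violation")           # second diagram
            second_fired = True
        elif second_fired and timestamp - start_time >= t3 + t4:
            capture("vehicle still in violation")     # third diagram
            return

monitor_illegal_parking(
    [(0, (0.5, 1.6)), (30, (0.5, 1.6)), (90, (0.5, 1.6))], t3=30, t4=60)
```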
In addition, it should be noted that, reference may be made to the prior art for a method for forming a track of a target vehicle by a camera, and details of an embodiment of the present invention are not described herein again.
Step 505: and the server determines an event evidence obtaining graph of the target vehicle according to the complete track and the at least one event occurrence process graph, and correspondingly stores the event evidence obtaining graph of the target vehicle and the license plate information of the target vehicle.
The server can determine a vehicle close-up map of the target vehicle according to the at least one event occurrence process map, determine a license plate recognition map and an overlapping area map of the target vehicle according to the complete track, and determine the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map and the overlapping area map as the event forensics map. The overlapping area map is the video frame image captured the last time the target vehicle was located in the overlapping portion of the monitoring ranges of two adjacent cameras before the complete track was formed by the vehicle warehousing event, vehicle ex-warehouse event or vehicle illegal parking event.
It should be noted that, when determining the vehicle close-up view of the target vehicle, the server may select the first event occurrence process view from the at least one event occurrence process view, intercept the license plate expansion area from the first event occurrence process view, and determine the intercepted license plate expansion area as the vehicle close-up view of the target vehicle. The license plate expansion area refers to an area including the head or the tail of the target vehicle after expansion according to the license plate area, and the first event occurrence process diagram is an event occurrence process diagram with the largest size of a pixel area occupied by the target vehicle in at least one event occurrence process diagram.
The server can determine the license plate area from the first event occurrence process diagram and detect the characteristics of the target vehicle around the license plate area. And when the detected characteristics accord with the characteristics of the head of the target vehicle, determining the license plate area as the area of the license plate at the head of the target vehicle, intercepting the area comprising the license plate and the head, and then taking the area comprising the license plate and the head as the license plate expansion area. Or when the detected features accord with the features of the tail of the target vehicle, determining the license plate region as the region of the license plate at the tail of the target vehicle, intercepting the region including the license plate and the tail of the vehicle, and then taking the region including the license plate and the tail of the vehicle as the license plate expansion region.
It should be further noted that, when the server determines the license plate recognition map and the overlapping area map of the target vehicle according to the complete track, the server may first determine the number of cameras that can monitor the target vehicle for the last time before the complete track is formed when the target vehicle enters the garage event, exits the garage event, or the vehicle illegal parking event occurs. When the number of the cameras is one, a license plate recognition image and an overlapping area image are extracted from a video frame image shot by the camera which can monitor the target vehicle at the last time. When the number of the cameras is two, a license plate recognition image and an overlapping area image are extracted from a video frame image shot by any one camera which can monitor the target vehicle at the last time.
In addition, the server can combine at least one event occurrence process picture, a vehicle close-up picture, a license plate recognition picture and an overlapping region picture into one image, and the combined image is used as an event forensics picture. In one possible implementation manner, the server may perform scaling processing on the at least one event occurrence process diagram, combine the at least one event occurrence process diagram, the vehicle close-up diagram, the license plate recognition diagram, and the overlapping region diagram after the scaling processing into one image, and use the combined image as the event forensics diagram. In another possible implementation manner, the server may further perform scaling processing on at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap region map, combine the images subjected to the scaling processing into one image, and use the combined image as the event forensics map. The embodiment of the present invention is not limited thereto.
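One way to realize the combination, sketched with the Pillow imaging library; the 3-column grid and the per-cell size are layout assumptions, and only the set of component images comes from this embodiment.

```python
from PIL import Image

CELL_W, CELL_H = 640, 360   # assumed size of each tile after scaling

def compose_forensics_map(process_maps, close_up, plate_map, overlap_map):
    """Scale every component image and tile them into one forensics image."""
    tiles = list(process_maps) + [close_up, plate_map, overlap_map]
    cols = 3
    rows = -(-len(tiles) // cols)   # ceiling division
    canvas = Image.new("RGB", (cols * CELL_W, rows * CELL_H))
    for index, tile in enumerate(tiles):
        scaled = tile.resize((CELL_W, CELL_H))      # the scaling step above
        canvas.paste(scaled, ((index % cols) * CELL_W,
                              (index // cols) * CELL_H))
    return canvas

# Example with six solid-color placeholders standing in for the real images.
placeholders = [Image.new("RGB", (1920, 1080), c)
                for c in ("gray", "silver", "white", "red", "green", "blue")]
forensics = compose_forensics_map(placeholders[:3], *placeholders[3:])
print(forensics.size)   # (1920, 720)
```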
In the embodiment of the invention, because the monitoring ranges of two adjacent cameras in the plurality of cameras overlap, and the vehicle information sent by each camera at least comprises the license plate information of the vehicle, when the vehicle information sent by at least one of the cameras comprises the vehicle information of the target vehicle, the license plate number of the target vehicle can be determined by combining the vehicle information sent by the at least one camera. This solves the problem that a license plate captured by a single camera cannot be recognized because it is blocked, severely tilted, or too small due to distance. The complete track of the target vehicle on the straight-line section of the road is determined according to the vehicle information of the target vehicle in the vehicle information sent by the at least one camera and the machine number of the at least one camera. At least one event occurrence process diagram of the target vehicle is acquired, the event forensics diagram of the target vehicle is determined according to the complete track and the at least one event occurrence process diagram, and the event forensics diagram of the target vehicle is stored in correspondence with the license plate information of the target vehicle. That is, the vehicle information of the target vehicle sent by all the cameras capable of monitoring the target vehicle is combined to form the complete track of the target vehicle on the straight-line section of the road, from which the event forensics diagram of the target vehicle is determined. Therefore, when the target vehicle is managed, the occurrence of a vehicle warehousing event, a vehicle ex-warehouse event or a vehicle illegal parking event can be confirmed from the event forensics diagram, and the license plate information of the target vehicle is clearly known, which greatly improves the effectiveness of managing the target vehicle.
An embodiment of the present invention provides an apparatus for vehicle management, and referring to fig. 10, the apparatus includes a receiving module 1001, a determining module 1002, an obtaining module 1003, and a storing module 1004.
The receiving module 1001 is configured to receive vehicle information and a machine number of each of the multiple cameras, where the vehicle information at least includes license plate information, the multiple cameras are disposed on a straight line portion of a road along a direction of the road, monitoring ranges of two adjacent cameras in the multiple cameras have an overlapping portion, and each camera is provided with a unique machine number.
The determining module 1002 is configured to, when the vehicle information sent by at least one of the multiple cameras includes vehicle information of a target vehicle, determine a complete track of the target vehicle on the straight-line segment of the road according to the vehicle information of the target vehicle in the vehicle information sent by the at least one camera and a machine number of the at least one camera, where the target vehicle is any vehicle currently performing vehicle management.
The obtaining module 1003 is configured to obtain at least one event occurrence process map of the target vehicle, where the at least one event occurrence process map is obtained by shooting, by any one of the plurality of cameras, when it is detected that the target vehicle has a vehicle entering event, a vehicle leaving event, or a vehicle parking violation event in the straight-line section of the road.
The storage module 1004 is configured to determine an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map, and to correspondingly store the event forensics map of the target vehicle and the license plate information of the target vehicle.
In one possible implementation manner, the vehicle information of the target vehicle further includes calibration coordinates of the target vehicle in a calibration coordinate system, and the determining module 1002 includes:
the conversion submodule is used for converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into at least one corresponding splicing coordinate in a splicing coordinate system according to the machine number of the at least one camera, and the splicing coordinate system is a coordinate system used for drawing the complete track;
the first determining submodule is used for determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the vehicle information sent by the at least one camera;
and the connecting sub-module is used for connecting the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, and returning to the step of receiving the vehicle information and the machine number sent by each camera in the plurality of cameras, until the current splicing coordinate of the target vehicle in the splicing coordinate system is the same as the previous splicing coordinate, whereupon the current splicing coordinate and the previous splicing coordinate of the target vehicle in the splicing coordinate system are connected to obtain the complete track of the target vehicle.
In one possible implementation, the machine numbers of the plurality of cameras are increased one by one starting from 0;
the conversion sub-module is further configured to use an abscissa of a calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera as an abscissa of the target vehicle in the stitching coordinate system, and add an ordinate of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera and the machine number of the at least one camera correspondingly to obtain an ordinate of the target vehicle in the stitching coordinate system.
In one possible implementation, the first determining sub-module includes:
the first determining unit is used for determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system when the number of the at least one splicing coordinate is two and the characters of the license plate number included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, and compared with other cameras in the at least one camera, the size of a pixel area occupied by the target vehicle in a video frame image shot by the first camera is the largest;
the second determining unit is used for determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are not identical;
the third determining unit is used for determining the distance between at least one splicing coordinate when the determined matching degree is within the preset matching degree range;
and the fourth determining unit is used for determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system when the distance between at least one splicing coordinate is smaller than the preset distance.
In one possible implementation, the determining module 1002 further includes:
the receiving submodule is used for receiving a reference image shot by each camera in the plurality of cameras and a machine number of the receiving submodule, each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and one identical calibration line is arranged in the reference images shot by two adjacent cameras;
and the establishing submodule is used for establishing the splicing coordinate system according to the machine numbers of the cameras, and the two calibration lines and at least two lane lines included in each of the received multiple reference images.
In one possible implementation, the establishing submodule includes:
the connecting unit is used for superposing the same calibration lines in the reference images shot by two adjacent cameras in the cameras according to the machine numbers of the cameras and connecting the leftmost lane lines in the reference images to obtain a vertical connecting line;
a fifth determining unit, configured to determine a direction from small to large of the machine numbers of the plurality of cameras as a direction of the vertical connecting line;
a sixth determining unit configured to acquire a lowermost calibration line in the reference image captured by the camera having the smallest machine number, and determine a horizontal rightward direction as a direction of the lowermost calibration line;
and the establishing unit is used for taking the vertical connecting line with the direction as a longitudinal axis of the splicing coordinate system, taking the calibration line with the direction at the lowest edge as a transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
In one possible implementation, the storage module 1004 includes:
the selecting submodule is used for selecting a first event occurrence process diagram from the at least one event occurrence process diagram, and the first event occurrence process diagram is an event occurrence process diagram with the largest pixel area size occupied by the target vehicle in the at least one event occurrence process diagram;
the intercepting submodule is used for intercepting a license plate expansion area from the first event occurrence process picture and determining the intercepted license plate expansion area as a vehicle close-up picture of the target vehicle, wherein the license plate expansion area is an area which comprises the head or the tail of the target vehicle after being expanded according to the license plate area;
the acquisition submodule is used for acquiring a license plate identification map and an overlapped area map of a target vehicle, wherein the license plate identification map refers to an image which can identify the license plate number for the last time before a complete track is formed when the target vehicle has a vehicle warehousing event, a vehicle ex-warehousing event or a vehicle illegal parking event, and the overlapped area map refers to a video frame image which is shot when the target vehicle is located in the overlapped part of the monitoring ranges of two adjacent cameras for the last time before the complete track when the target vehicle has the vehicle warehousing event, the vehicle ex-warehousing event or the vehicle illegal parking event;
and the second determining submodule is used for determining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map and the overlapping area map as the event forensics map.
In one possible implementation, the obtaining sub-module includes:
a seventh determining unit for determining the number of cameras that can monitor the target vehicle the last time before the complete trajectory is formed;
a first extracting unit for extracting the license plate recognition map and the overlap region map from the video frame image captured by the camera capable of monitoring the target vehicle at the last time when the number of the cameras is one;
and a second extraction unit configured to extract the license plate recognition map and the overlap area map from a video frame image captured by any one of the cameras that can monitor the target vehicle at the last time, when the number of the cameras is two.
In one possible implementation, the second determining sub-module includes:
and the synthesis unit is used for synthesizing the at least one event occurrence process picture, the vehicle close-up picture, the license plate recognition picture and the overlapping area picture into one image and taking the synthesized image as the event evidence obtaining picture.
In the embodiment of the invention, because the monitoring ranges of two adjacent cameras in the plurality of cameras overlap, and the vehicle information sent by each camera at least comprises the license plate information of the vehicle, when the vehicle information sent by at least one of the cameras comprises the vehicle information of the target vehicle, the license plate number of the target vehicle can be determined by combining the vehicle information sent by the at least one camera. This solves the problem that a license plate captured by a single camera cannot be recognized because it is blocked, severely tilted, or too small due to distance. The complete track of the target vehicle on the straight-line section of the road is determined according to the vehicle information of the target vehicle in the vehicle information sent by the at least one camera and the machine number of the at least one camera. At least one event occurrence process diagram of the target vehicle is acquired, the event forensics diagram of the target vehicle is determined according to the complete track and the at least one event occurrence process diagram, and the event forensics diagram of the target vehicle is stored in correspondence with the license plate information of the target vehicle. That is, the vehicle information of the target vehicle sent by all the cameras capable of monitoring the target vehicle is combined to form the complete track of the target vehicle on the straight-line section of the road, from which the event forensics diagram of the target vehicle is determined. Therefore, when the target vehicle is managed, the occurrence of a vehicle warehousing event, a vehicle ex-warehouse event or a vehicle illegal parking event can be confirmed from the event forensics diagram, and the license plate information of the target vehicle is clearly known, which greatly improves the effectiveness of managing the target vehicle.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 11 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 1100 may vary widely in configuration or performance and may include one or more central processing units (CPUs) 1122 (e.g., one or more processors), memory 1132, and one or more storage media 1130 (e.g., one or more mass storage devices) storing applications 1142 or data 1144. The memory 1132 and the storage media 1130 may be transient storage or persistent storage. The program stored on a storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processing unit 1122 may be configured to communicate with the storage medium 1130 and execute, on the server 1100, the series of instruction operations in the storage medium 1130.
The server 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input-output interfaces 1158, one or more keyboards 1156, and/or one or more operating systems 1141, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The server 1100 may be used to perform the steps performed by the server in the vehicle management method provided in the above-described embodiment.
The embodiment of the present invention also provides a computer-readable storage medium, which is applied to a terminal, and the computer-readable storage medium stores at least one instruction, at least one program, a code set, or a set of instructions, where the instruction, the program, the code set, or the set of instructions are loaded and executed by a processor to implement the operations performed by the server in the vehicle management method according to the above embodiment.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof.

Claims (16)

1. A method of vehicle management, the method comprising:
receiving vehicle information and a machine number of each camera in a plurality of cameras, wherein the vehicle information at least comprises license plate information, the cameras are arranged on a straight line section of a road along the direction of the road, the monitoring ranges of two adjacent cameras in the cameras are overlapped, and each camera is provided with a unique machine number;
when the vehicle information sent by at least one camera among the multiple cameras comprises the vehicle information of a target vehicle, converting the calibration coordinates of the target vehicle in a calibration coordinate system, included in the vehicle information sent by the at least one camera, into at least one corresponding splicing coordinate in a splicing coordinate system according to the machine number of the at least one camera, and determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle in the vehicle information sent by the at least one camera and the at least one splicing coordinate;
when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connecting the previous splicing coordinate with the current splicing coordinate in the splicing coordinate system and returning to the step of receiving the vehicle information and the machine number sent by each camera of the multiple cameras, until the current splicing coordinate of the target vehicle in the splicing coordinate system is the same as the previous splicing coordinate, whereupon the connected splicing coordinates form a complete track of the target vehicle on the straight-line section of the road, wherein the target vehicle is any vehicle currently under vehicle management, and the splicing coordinate system is a coordinate system used for drawing the complete track;
acquiring at least one event occurrence process map of the target vehicle, wherein the at least one event occurrence process map is shot when any camera of the multiple cameras detects that a vehicle warehousing event, a vehicle ex-warehouse event or a vehicle illegal parking event of the target vehicle occurs on the straight-line section of the road; determining an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map, and storing the event forensics map of the target vehicle in correspondence with the license plate information of the target vehicle;
the machine numbers of the multiple cameras increase one by one from 0; and the converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into the at least one corresponding splicing coordinate in the splicing coordinate system according to the machine number of the at least one camera comprises:
taking the abscissa of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera as the abscissa of the target vehicle in the splicing coordinate system, and adding the ordinate of that calibration coordinate to the machine number of the corresponding camera to obtain the ordinate of the target vehicle in the splicing coordinate system.
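As a concrete illustration of the conversion in claim 1, the sketch below assumes (our reading, not an explicit statement of the claim) that the calibration ordinate is normalized so that one camera's monitored segment spans one unit along the road, so that the machine number, counting from 0, acts as a per-segment offset.

```python
def to_splicing(calib_x: float, calib_y: float, machine_no: int) -> tuple:
    """Map a per-camera calibration coordinate into the splicing system:
    the abscissa carries over unchanged; the ordinate is offset by the
    camera's machine number (machine numbers count up from 0 along the road).
    """
    return calib_x, calib_y + machine_no

# Example: a vehicle at calibration coordinate (1.5, 0.8) seen by camera 2
# lands at (1.5, 2.8) in the splicing coordinate system.
assert to_splicing(1.5, 0.8, 2) == (1.5, 2.8)
```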
2. The method of claim 1, wherein the determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle in the vehicle information sent by the at least one camera and the at least one splicing coordinate comprises:
when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, determining the splicing coordinate corresponding to a first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system, wherein, compared with the other cameras in the at least one camera, the pixel area occupied by the target vehicle in a video frame image shot by the first camera is the largest;
when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are not completely the same, determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate;
when the determined matching degree is within a preset matching degree range, determining the distance between the two splicing coordinates;
and when the distance between the two splicing coordinates is smaller than a preset distance, determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system.
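A minimal sketch of the decision procedure in claim 2, with an illustrative character-agreement formula standing in for the "matching degree" and placeholder thresholds (the patent specifies neither):

```python
import math

def current_splicing_coord(det_a, det_b, match_range=(0.6, 1.0), max_dist=1.0):
    """Resolve the target vehicle's current splicing coordinate when two
    adjacent cameras both report it (claim 2).

    det_a / det_b are dicts with keys 'plate' (license plate string),
    'coord' (splicing coordinate) and 'area' (pixel area the vehicle
    occupies in that camera's frame). The matching-degree formula and the
    two thresholds are illustrative placeholders, not values from the patent.
    """
    # "First camera": the one in which the vehicle occupies the largest pixel area.
    first = det_a if det_a['area'] >= det_b['area'] else det_b

    if det_a['plate'] == det_b['plate']:
        return first['coord']  # identical plate characters: same vehicle

    # Plates differ: score character-by-character agreement.
    matches = sum(a == b for a, b in zip(det_a['plate'], det_b['plate']))
    degree = matches / max(len(det_a['plate']), len(det_b['plate']))
    if match_range[0] <= degree <= match_range[1]:
        if math.dist(det_a['coord'], det_b['coord']) < max_dist:
            return first['coord']  # close enough: treat as one vehicle
    return None  # otherwise, treat the two reports as different vehicles
```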
3. The method of claim 1, wherein before the converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into the at least one corresponding splicing coordinate in the splicing coordinate system according to the machine number of the at least one camera, the method further comprises:
receiving a reference image shot by each camera in the plurality of cameras and a machine number of the camera, wherein each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and the reference images shot by two adjacent cameras have the same calibration line;
and establishing the splicing coordinate system according to the machine numbers of the multiple cameras and the two calibration lines and at least two lane lines included in each of the received multiple reference images.
4. The method of claim 3, wherein the establishing the splicing coordinate system according to the machine numbers of the multiple cameras and the two calibration lines and at least two lane lines included in each of the received multiple reference images comprises:
superposing, according to the machine numbers of the multiple cameras, the same calibration lines in the reference images shot by two adjacent cameras among the multiple cameras, and connecting the leftmost lane lines in the multiple reference images to obtain a vertical connecting line;
determining the direction in which the machine numbers of the multiple cameras increase as the direction of the vertical connecting line;
acquiring the bottommost calibration line in the reference image shot by the camera with the smallest machine number, and determining the horizontal rightward direction as the direction of the bottommost calibration line;
and taking the directed vertical connecting line as the longitudinal axis of the splicing coordinate system and the directed bottommost calibration line as the transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
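One possible pixel-to-coordinate mapping consistent with claims 3-4, under our own assumptions about pixel conventions (origin at the bottom-left of each camera's calibrated strip); the gap parameters stand in for the distances between adjacent calibration lines and lane lines:

```python
def make_pixel_mapper(calib_gap_px: float, lane_gap_px: float):
    """Build a mapper from reference-image pixels to the splicing system:
    the bottommost calibration line of camera 0 is the horizontal axis, the
    connected leftmost lane lines form the vertical axis, and the gaps
    between adjacent calibration lines / lane lines set the unit lengths.
    """
    def to_splicing(machine_no: int, px: float, py: float):
        x = px / lane_gap_px                 # lane-line spacing = one horizontal unit
        y = machine_no + py / calib_gap_px   # camera n covers [n, n + 1] vertically
        return x, y
    return to_splicing

# Example: with 200 px between calibration lines and 150 px between lane
# lines, pixel (300, 100) in camera 1's strip maps to (2.0, 1.5).
mapper = make_pixel_mapper(calib_gap_px=200, lane_gap_px=150)
assert mapper(1, 300, 100) == (2.0, 1.5)
```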
5. The method of claim 1, wherein the determining an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map comprises:
selecting a first event occurrence process map from the at least one event occurrence process map, wherein the first event occurrence process map is the event occurrence process map in which the pixel area occupied by the target vehicle is the largest among the at least one event occurrence process map;
cropping a license plate expansion area from the first event occurrence process map, and determining the cropped license plate expansion area as a vehicle close-up map of the target vehicle, wherein the license plate expansion area is an area expanded from the license plate area so as to include the head or the tail of the target vehicle;
acquiring a license plate recognition map and an overlap area map of the target vehicle, wherein the license plate recognition map is the last image, before the complete track is formed when the vehicle warehousing event, the vehicle ex-warehouse event or the vehicle illegal parking event of the target vehicle occurs, in which the license plate number can be recognized, and the overlap area map is the video frame image shot the last time, before the complete track is formed when the vehicle warehousing event, the vehicle ex-warehouse event or the vehicle illegal parking event of the target vehicle occurs, that the target vehicle is located in the overlapping part of the monitoring ranges of two adjacent cameras;
and determining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map and the overlap area map as the event forensics map.
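A hedged sketch of the "license plate expansion area" crop in claim 5; the expansion factors are illustrative guesses, since the claim only requires that the expanded area cover the head or tail of the vehicle:

```python
import numpy as np

def crop_plate_expansion(frame: np.ndarray, plate_box, scale_w=3.0, scale_h=6.0):
    """Crop the license plate expansion area of claim 5: the plate's
    bounding box grown until it covers the vehicle's head or tail.

    frame is an H x W x 3 array; plate_box is (x1, y1, x2, y2) in pixels.
    The expansion factors scale_w / scale_h are editorial placeholders.
    """
    x1, y1, x2, y2 = plate_box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2          # plate center
    half_w = (x2 - x1) * scale_w / 2               # grow width around the plate
    half_h = (y2 - y1) * scale_h / 2               # grow height more: plates are wide
    h, w = frame.shape[:2]
    left, top = max(0, int(cx - half_w)), max(0, int(cy - half_h))
    right, bottom = min(w, int(cx + half_w)), min(h, int(cy + half_h))
    return frame[top:bottom, left:right]           # the vehicle close-up map
```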
6. The method of claim 5, wherein the acquiring the license plate recognition map and the overlap area map of the target vehicle comprises:
determining the number of cameras that can monitor the target vehicle for the last time before the complete track is formed;
when the number of the cameras is one, extracting the license plate recognition map and the overlap area map from the video frame images shot by that camera;
and when the number of the cameras is two, extracting the license plate recognition map and the overlap area map from the video frame images shot by either of the two cameras.
7. The method of claim 5, wherein said determining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap area map as the event forensics map comprises:
and combining the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map and the overlap area map into one image, and taking the combined image as the event forensics map.
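A minimal sketch of the image combination in claim 7, using Pillow; the grid layout and white background are editorial choices, since the claim only requires the maps to be combined into one image:

```python
from PIL import Image

def compose_forensics_map(images, cols=2):
    """Tile the event occurrence process map(s), vehicle close-up map,
    license plate recognition map and overlap area map into a single
    evidence image (claim 7)."""
    cell_w = max(im.width for im in images)
    cell_h = max(im.height for im in images)
    rows = -(-len(images) // cols)  # ceiling division
    canvas = Image.new("RGB", (cols * cell_w, rows * cell_h), "white")
    for i, im in enumerate(images):
        canvas.paste(im, ((i % cols) * cell_w, (i // cols) * cell_h))
    return canvas
```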
8. An apparatus for vehicle management, the apparatus comprising:
a receiving module, used for receiving vehicle information and a machine number sent by each camera of multiple cameras, wherein the vehicle information comprises at least license plate information, the multiple cameras are arranged along the road on a straight-line section of the road, the monitoring ranges of two adjacent cameras among the multiple cameras overlap, and each camera is provided with a unique machine number;
a determining module, used for determining, when the vehicle information sent by at least one camera among the multiple cameras comprises the vehicle information of a target vehicle, a complete track of the target vehicle on the straight-line section of the road according to the vehicle information of the target vehicle sent by the at least one camera and the machine number of the at least one camera, wherein the target vehicle is any vehicle currently under vehicle management;
an acquisition module, used for acquiring at least one event occurrence process map of the target vehicle, wherein the at least one event occurrence process map is shot when any camera of the multiple cameras detects that a vehicle warehousing event, a vehicle ex-warehouse event or a vehicle illegal parking event of the target vehicle occurs on the straight-line section of the road;
a storage module, used for determining an event forensics map of the target vehicle according to the complete track and the at least one event occurrence process map, and storing the event forensics map of the target vehicle in correspondence with the license plate information of the target vehicle;
the vehicle information of the target vehicle further includes calibration coordinates of the target vehicle in a calibration coordinate system, and the determining module includes:
the conversion submodule is used for converting the calibration coordinates of the target vehicle in the vehicle information sent by the at least one camera into at least one corresponding splicing coordinate in a splicing coordinate system according to the machine number of the at least one camera, wherein the splicing coordinate system is a coordinate system used for drawing the complete track;
the first determining submodule is used for determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the vehicle information sent by the at least one camera;
a connection sub-module, configured to, when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connect the previous splicing coordinate with the current splicing coordinate in the splicing coordinate system and return to the operation of receiving the vehicle information and the machine number sent by each camera of the multiple cameras, until the current splicing coordinate of the target vehicle in the splicing coordinate system is the same as the previous splicing coordinate, whereupon the connected splicing coordinates form the complete track of the target vehicle;
wherein the machine numbers of the multiple cameras increase one by one from 0; and the conversion submodule is further configured to take the abscissa of the calibration coordinate of the target vehicle in the vehicle information sent by the at least one camera as the abscissa of the target vehicle in the splicing coordinate system, and add the ordinate of that calibration coordinate to the machine number of the corresponding camera to obtain the ordinate of the target vehicle in the splicing coordinate system.
9. The apparatus of claim 8, wherein the first determining submodule comprises:
a first determining unit, configured to, when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, determine the splicing coordinate corresponding to a first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system, wherein, compared with the other cameras in the at least one camera, the pixel area occupied by the target vehicle in a video frame image shot by the first camera is the largest;
a second determining unit, configured to determine, when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are not completely the same, the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate;
a third determining unit, configured to determine the distance between the two splicing coordinates when the determined matching degree is within a preset matching degree range;
and a fourth determining unit, configured to determine the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system when the distance between the two splicing coordinates is smaller than the preset distance.
10. The apparatus of claim 8, wherein the determining module further comprises:
a receiving submodule, configured to receive a reference image shot by each camera of the multiple cameras and the machine number of the camera, wherein each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and the reference images shot by two adjacent cameras share one identical calibration line;
and the establishing submodule is used for establishing the splicing coordinate system according to the machine numbers of the cameras, and the two calibration lines and at least two lane lines included in each of the received multiple reference images.
11. The apparatus of claim 10, wherein the establishing sub-module comprises:
the connecting unit is used for superposing the same calibration lines in the reference images shot by two adjacent cameras in the cameras according to the machine numbers of the cameras, and connecting the leftmost lane lines in the reference images to obtain a vertical connecting line;
a fifth determining unit, configured to determine a direction in which the machine numbers of the plurality of cameras are from small to large as a direction of the vertical connecting line;
a sixth determining unit configured to acquire a lowermost calibration line in a reference image captured by a camera having a smallest machine number, and determine a horizontal rightward direction as a direction of the lowermost calibration line;
and an establishing unit, configured to take the directed vertical connecting line as the longitudinal axis of the splicing coordinate system and the directed bottommost calibration line as the transverse axis of the splicing coordinate system, and establish the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
12. The apparatus of claim 8, wherein the storage module comprises:
a selecting submodule, configured to select a first event occurrence process map from the at least one event occurrence process map, wherein the first event occurrence process map is the event occurrence process map in which the pixel area occupied by the target vehicle is the largest among the at least one event occurrence process map;
a cropping submodule, configured to crop a license plate expansion area from the first event occurrence process map and determine the cropped license plate expansion area as a vehicle close-up map of the target vehicle, wherein the license plate expansion area is an area expanded from the license plate area so as to include the head or the tail of the target vehicle;
an acquisition sub-module, configured to acquire a license plate recognition map and an overlap area map of the target vehicle, wherein the license plate recognition map is the last image, before the complete track is formed when the vehicle warehousing event, the vehicle ex-warehouse event or the vehicle illegal parking event of the target vehicle occurs, in which the license plate number can be recognized, and the overlap area map is the video frame image shot the last time, before the complete track is formed, that the target vehicle is located in the overlapping part of the monitoring ranges of two adjacent cameras;
a second determining submodule, configured to determine the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map, and the overlap area map as the event forensics map.
13. The apparatus of claim 12, wherein the acquisition submodule comprises:
a seventh determining unit, configured to determine the number of cameras that can monitor the target vehicle for the last time before the complete track is formed;
a first extraction unit, configured to extract, when the number of the cameras is one, the license plate recognition map and the overlap area map from the video frame images shot by that camera;
and a second extraction unit, configured to extract, when the number of the cameras is two, the license plate recognition map and the overlap area map from the video frame images shot by either of the two cameras.
14. The apparatus of claim 12, wherein the second determining submodule comprises:
and a synthesis unit, configured to combine the at least one event occurrence process map, the vehicle close-up map, the license plate recognition map and the overlap area map into one image and take the combined image as the event forensics map.
15. An apparatus for vehicle management, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1-7.
16. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method of any of claims 1-7.
CN201910059669.0A 2019-01-22 2019-01-22 Method, device and computer readable storage medium for vehicle management Active CN111462502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910059669.0A CN111462502B (en) 2019-01-22 2019-01-22 Method, device and computer readable storage medium for vehicle management

Publications (2)

Publication Number Publication Date
CN111462502A CN111462502A (en) 2020-07-28
CN111462502B (en) 2021-06-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant