CN111951598B - Vehicle tracking monitoring method, device and system
- Publication number
- CN111951598B (application CN201910412661.8A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- position information
- camera
- coordinate system
- information
- Legal status: Active (the status listed is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/14—Traffic control systems for road vehicles indicating individual free spaces in parking areas
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/36—Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
The application provides a vehicle tracking and monitoring method, device and system. The method comprises the following steps: receiving vehicle information sent by cameras arranged in a designated area, wherein the vehicle information sent by each camera at least comprises vehicle position information and first information for identifying the same vehicle; determining, in a created stitching coordinate system, coordinate position information corresponding to each piece of vehicle position information; and identifying, from the coordinate position information and according to the first information, target coordinate position information belonging to the same target vehicle, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information. In this way, the complete movement track of the target vehicle in the designated area can be obtained, and the vehicle can be monitored more effectively.
Description
Technical Field
The present application relates to the technical field of video monitoring, and in particular to a vehicle tracking and monitoring method, device and system.
Background
With car ownership continuing to rise, the problems of finding parking and finding one's parked car have become increasingly prominent and have begun to constrain urban economic development. To address this, ever more parking lots are being built, among them open-air parking lots.
In the related art, a parking lot is typically monitored by assigning each monitoring camera independently to a small number of parking spaces, such as 3-4 spaces. Each camera can only capture a vehicle within its own field of view, and because an open-air parking lot covers a large area, the complete movement track of a vehicle across the whole lot cannot be monitored effectively.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus and a system for tracking and monitoring a vehicle, so as to facilitate effective monitoring of the vehicle.
Specifically, the method is realized through the following technical scheme:
in a first aspect, an embodiment of the present application provides a vehicle tracking monitoring method, where the method is applied to a server, and includes:
receiving vehicle information sent by cameras arranged in a designated area, wherein the vehicle information sent by each camera at least comprises: vehicle position information;
determining, in a created stitching coordinate system, coordinate position information corresponding to each piece of vehicle position information;
and identifying target coordinate position information belonging to the same target vehicle from the coordinate position information, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information.
In a second aspect, an embodiment of the present application provides a vehicle tracking monitoring device, including:
the receiving module is used for receiving vehicle information sent by each camera arranged in the designated area, and the vehicle information sent by each camera at least comprises: vehicle position information and first information for identifying the same vehicle;
the determining module is used for determining, in a created stitching coordinate system, coordinate position information corresponding to each piece of vehicle position information;
and the identification module is used for identifying, according to the first information, target coordinate position information belonging to the same target vehicle from the coordinate position information, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information.
In a third aspect, an embodiment of the present application provides a vehicle monitoring system, including a server and a plurality of monitoring cameras, where the server performs vehicle tracking and monitoring by applying the method according to the first aspect;
the monitoring cameras are respectively used for collecting video images of different sub-areas of the designated area and identifying a target vehicle from the video images;
determining the vehicle position information of the target vehicle in the camera's configured calibration coordinate system according to the position of the target vehicle in the video image and the correspondence between the pixel coordinate system of the video image and the calibration coordinate system of the camera; and transmitting vehicle information including at least the vehicle position information to the server;
the server is used for receiving the vehicle information sent by the cameras arranged in the designated area, wherein the vehicle information sent by each camera at least comprises vehicle position information; determining, in a created stitching coordinate system, coordinate position information corresponding to each piece of vehicle position information; and identifying target coordinate position information belonging to the same target vehicle from the coordinate position information, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method according to the first aspect when executing the program.
According to the vehicle tracking and monitoring method, device and system provided by the present application, different sub-areas of a designated area can be monitored by a plurality of cameras; the vehicle position information sent by each camera is received, and the corresponding coordinate position information is determined in a created stitching coordinate system; target coordinate position information belonging to the same target vehicle is identified from the determined coordinate position information, and the complete movement track of the target vehicle in the designated area is then determined from the target coordinate position information. The vehicle can thus be monitored effectively throughout the designated area.
Drawings
FIG. 1 is a schematic view of a vehicle tracking monitoring method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a vehicle tracking monitoring method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a calibration coordinate system shown in an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of a parking lot shown in an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a vehicle tracking monitoring device according to an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a parking space camera calibration coordinate system shown in an exemplary embodiment of the present application;
FIG. 7 is a schematic illustration of an imaging shown in an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram illustrating coordinate transformation between an entrance camera and a parking space camera according to an exemplary embodiment of the present application;
FIG. 9 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present application, as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms; the terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
In the prior art, for a large open parking lot, it is difficult to capture the complete movement track of a target vehicle within the lot using a single camera. On this basis, the embodiments of the present application provide a vehicle tracking and monitoring method, device and system.
FIG. 1 illustrates a scenario of the vehicle tracking and monitoring method provided in an embodiment of the present application. Referring to FIG. 1, a plurality of cameras 20 are arranged in a monitored designated area, which may be a complete large parking lot (or an independent area within a parking lot). The cameras collect video images of different sub-areas of the parking lot, and together their monitored areas cover the entire lot. Each camera performs vehicle identification on its collected video images and obtains the vehicle position information of each vehicle. As a vehicle drives through the monitored areas of successive cameras, each camera that observes the vehicle uploads the vehicle's position information to a server, and the server assembles the complete movement track of the vehicle in the parking lot from the received position information of the same vehicle sent by the cameras. The server 10 and the plurality of cameras 20 may be connected via a network or the like.
Referring to FIG. 2, an embodiment of the present application provides a vehicle tracking and monitoring method applied to the server, which is communicatively connected to a plurality of cameras simultaneously. The method includes the following steps S10-S30:
step S10, receiving vehicle information sent by each camera set in the designated area, where the vehicle information sent by each camera at least includes: vehicle position information and first information for identifying the same vehicle.
In this embodiment, the cameras include entrance cameras and parking space cameras. A plurality of cameras are arranged in the designated area for simultaneous video monitoring; each camera performs vehicle identification on its collected video images, calculates the vehicle position information of each identified vehicle, and uploads that information to the server.
The first information for identifying the same vehicle includes a license plate identifier; or it includes a time point and the vehicle identifier assigned to the vehicle by the camera.
Optionally, the vehicle position information includes the calibration coordinates of the vehicle: each camera calculates the pixel coordinate data of the vehicle in the image coordinate system, converts that data into calibration coordinates in the camera's own calibration coordinate system, and uploads the calibration coordinates of the vehicle to the server in real time.
Optionally, the vehicle position information may instead include the pixel coordinate data of the vehicle: after each camera calculates the pixel coordinate data of the vehicle in the image coordinate system, the data is uploaded to the server, and the server converts it into calibration coordinates according to the calibration coordinate system corresponding to that camera.
The pixel coordinate data may be that of a certain key point of the vehicle. For example, when the vehicle is represented by a rectangular frame in pixel coordinates, the key point may be the center point of the rectangular frame or the midpoint of its lower boundary; alternatively, the key point may be the center point of the license plate. The pixel coordinates of the key point are converted to obtain its calibration coordinates in the camera's calibration coordinate system, which serve as the calibration coordinates of the vehicle.
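As an illustration of the key-point choice just described, the following sketch (not part of the patent; the function name and box format are assumptions) picks the pixel key point from a detected bounding rectangle:

```python
# Illustrative sketch only; the patent does not prescribe an implementation.
# A detection is assumed to be an axis-aligned pixel rectangle.

def vehicle_keypoint(box, mode="bottom_center"):
    """Pick the pixel key point that is later converted into the
    camera's calibration coordinate system.

    box: (x_min, y_min, x_max, y_max) in pixel coordinates.
    mode: "center" for the center of the rectangle, or "bottom_center"
          for the midpoint of its lower boundary (both named above).
    """
    x_min, y_min, x_max, y_max = box
    if mode == "center":
        return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    return ((x_min + x_max) / 2.0, y_max)  # midpoint of the lower edge

print(vehicle_keypoint((100, 220, 260, 340)))  # -> (180.0, 340.0)
```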
Step S20, determining, in the created stitching coordinate system, coordinate position information corresponding to each piece of vehicle position information.
Step S30, identifying target coordinate position information belonging to the same target vehicle from the coordinate position information, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information.
In an embodiment of the present application, the vehicle information sent by each camera further includes: a camera identifier;
In this embodiment, in step S20, determining the coordinate position information corresponding to the vehicle position information in the created stitching coordinate system specifically includes the following steps S21-S22:
Step S21, for the vehicle information sent by each camera, determining the corresponding area range in the stitching coordinate system according to the camera identifier carried by the vehicle information.
In this embodiment, the server stores the stitching coordinate system in advance; it is formed by the region blocks corresponding to the respective cameras.
Step S22, determining the coordinate position information corresponding to the vehicle position information in the vehicle information according to the position information of the area range in the stitching coordinate system.
In an embodiment of the present application, a schematic diagram of the stitching coordinate system may be as shown in FIG. 3. The stitching coordinate system includes a plurality of area blocks, each block (a square in the drawing) representing the calibration coordinate system of one camera; the stitching coordinate system is thus composed of the calibration coordinate systems of the cameras. Each camera's calibration coordinate system corresponds to that camera's identifier, so once a camera identifier is obtained, the corresponding calibration coordinate system can be located within the stitching coordinate system. In this embodiment, for a stitching coordinate system composed of M rows and N columns of calibration coordinate systems, the vertex at the top right corner has coordinates (N, M) in the stitching coordinate system.
In this embodiment, in step S22, determining the coordinate position information corresponding to the vehicle position information in the vehicle information according to the position information of the area range in the stitching coordinate system includes the following steps:
S221, for the vehicle information sent by each camera, finding, among the locally stored calibration coordinate systems of the cameras, the target calibration coordinate system matching the camera identifier carried by the vehicle information, and determining the target area block where the target calibration coordinate system is located as the area range.
In this embodiment, the target calibration coordinate system corresponding to the camera identifier is first determined within the stitching coordinate system, the target region block in which it is located is taken as the area range, and the position information of that area range in the stitching coordinate system is then obtained.
S222, determining the coordinate position information corresponding to the vehicle position information in the stitching coordinate system according to the position information of the area range in the stitching coordinate system and the calibration coordinates of the vehicle.
In the stitching coordinate system, the position of each camera's calibration coordinate system is fixed; as shown again in FIG. 3, the calibration coordinate system containing point A is located at the i-th row and j-th column of the stitching coordinate system.
In this embodiment, the vehicle position information consists of calibration coordinates. After the position, within the stitching coordinate system, of the target calibration coordinate system corresponding to the camera identifier is determined, the coordinate position information in the stitching coordinate system is determined from the coordinate position information of a designated point of the target calibration coordinate system together with the vehicle position information in the vehicle information.
The designated point may be the coordinate origin of the target calibration coordinate system, or another coordinate point, such as the point with calibration coordinates (1, 1) in the target calibration coordinate system.
Taking the designated point to be the coordinate origin of the target calibration coordinate system, and referring again to the stitching coordinate system shown in FIG. 3, the coordinates of a target point A lying in the calibration coordinate system at row i, column j are calculated as follows:
let the calibration coordinates of the target point A in that calibration coordinate system be (x, y); the origin of that calibration coordinate system has coordinates (j-1, i-1) in the stitching coordinate system, so the coordinates of A in the stitching coordinate system are obtained simply by translating on the basis of (j-1, i-1). Specifically, adding the calibration coordinates (x, y) of the target point A to the coordinates (j-1, i-1) of the origin gives the coordinates (j-1+x, i-1+y) of A in the stitching coordinate system. This completes the conversion of A's calibration coordinates into coordinates in the stitching coordinate system. In this embodiment, the stitching coordinates corresponding to the vehicle's calibration coordinates are calculated in the same way.
In an embodiment of the present application, the vehicle information sent by each camera further includes a license plate identifier corresponding to the vehicle position information.
Further, in the present embodiment, the target coordinate position information belonging to the same target vehicle can be identified from the respective coordinate position information by the following steps B10-B20:
and step B10, determining the position information of each vehicle corresponding to the same license plate identifier.
Aiming at the situation that each camera can clearly shoot the license plate of the vehicle, the camera is arranged to identify the license plate number of the vehicle from the collected video image, the license plate number is used as the license plate identification of the vehicle, when the camera sends the vehicle information of the vehicle to the server, the vehicle information comprises the license plate identification of the vehicle, and then the server can obtain the vehicle position information corresponding to the license plate identification from the position information of each vehicle sent by different cameras according to the same license plate identification.
And step B20, determining the coordinate position information of each piece of vehicle position information in the splicing coordinate system as the target coordinate position information.
After the server determines the vehicle position information corresponding to the same license plate identifier, the coordinate position information of the vehicle position information in the splicing coordinate system is determined as the coordinate position information of the vehicle to which the license plate identifier belongs.
Furthermore, in the method provided in this embodiment, vehicle position information of the same vehicle is obtained from vehicle information sent by different cameras according to the license plate number of each vehicle, coordinate position information of the vehicle in a stitching coordinate system is obtained according to the vehicle position information, and a complete movement track in a specified area is obtained according to the coordinate position information, so that the tracks of the vehicles in different camera view fields are stitched to obtain a complete movement track, and the method has the characteristics of simplicity and high efficiency.
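The plate-based association can be pictured with the following sketch, where the report format (plate, time point, stitching coordinate) is an assumption made for illustration:

```python
from collections import defaultdict

# Illustrative reports: (license_plate, time_point, stitching_xy).
reports = [
    ("A12345", 1, (4.4, 1.7)),
    ("B67890", 1, (0.2, 0.9)),
    ("A12345", 2, (4.6, 1.8)),
]

def tracks_by_plate(reports):
    """Group stitching coordinates by license plate identifier and order
    them in time; each value is the movement track of one target vehicle."""
    grouped = defaultdict(list)
    for plate, t, xy in reports:
        grouped[plate].append((t, xy))
    return {p: [xy for _, xy in sorted(pts)] for p, pts in grouped.items()}

print(tracks_by_plate(reports)["A12345"])  # -> [(4.4, 1.7), (4.6, 1.8)]
```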
In another embodiment of the present application, for scenes in which a camera (such as one mounted high above the ground) cannot capture the vehicle's license plate, the vehicle information sent by each camera further includes a time point and the vehicle identifier assigned to the vehicle by that camera. In this embodiment, the coordinate position information belonging to the same target vehicle is determined, from the coordinate position information corresponding to the vehicle position information sent by each camera, according to the time points and the camera-assigned vehicle identifiers.
Specifically, the step of determining the coordinate position information belonging to the same target vehicle from the coordinate position information corresponding to the vehicle position information sent by each camera according to the time point and the vehicle identifier allocated to the vehicle by the camera includes the following steps C10-C20:
and step C10, aiming at the first camera, the first camera is a camera for sending vehicle information, the vehicle position information which is sent by the first camera at different time points and corresponds to the same vehicle identification is obtained, and the coordinate position information corresponding to the vehicle position information is determined to be the target coordinate position information of the same target vehicle.
In this embodiment, for the sake of convenience of distinction, two adjacent cameras are referred to as a first camera and a second camera, respectively, and monitoring areas of the first camera and the second camera have an overlapping portion.
And step C20, if the vehicle position information sent by the first camera and the adjacent second camera at least one same time point corresponds to the same coordinate position information, and the coordinate position information is one of the determined target coordinate position information, determining a target vehicle identifier corresponding to the vehicle position information from the vehicle information sent by the second camera, and determining the coordinate position information corresponding to the vehicle position information corresponding to the target vehicle identifier sent by the second camera at different time points as the target coordinate position information of the target vehicle.
In this embodiment, each camera allocates a vehicle identifier to a vehicle entering a monitoring area of the vehicle, for example, a vehicle identifier of the same vehicle in a first camera is a, a server receives vehicle position information corresponding to the vehicle identifier a sent by the first camera at different time points, and obtains target coordinate position information of the vehicle according to a coordinate position in a stitching coordinate system corresponding to the vehicle position information; after the vehicle enters the monitoring area of the second camera, the second camera assigns a vehicle identifier B to the vehicle, and when the vehicle is in an overlapping area of the first camera and the second camera, the first camera and the second camera may simultaneously send vehicle position information with the vehicle identifier a and vehicle position information with the vehicle identifier B to the server, for example, the vehicle is in the overlapping area of two cameras at nine to nine points, and the coordinate positions in the stitching coordinate system corresponding to the vehicle position information with the vehicle identifier a and the vehicle position information with the vehicle identifier B uploaded by the two cameras at any time point (for example, zero one minute at nine) in a period of ten minutes from nine to nine points are the same. The server can determine that the vehicle identifier A and the vehicle identifier B represent the same target vehicle according to the fact that the coordinate position information corresponding to the vehicle position information sent by the first camera and the second camera at the same time point is the same, and further the coordinate position information corresponding to the vehicle position information, sent by the second camera at different time points, of which the vehicle identifier B is the target vehicle is also determined as the target coordinate position information of the target vehicle. And when the vehicle continues to run through the monitoring area of the subsequent camera, the coordinate position information of the vehicle can be obtained according to the mode until the vehicle runs out of the parking lot. And finally, obtaining the complete movement track of the vehicle in the parking lot according to the determined coordinate position information of the target vehicle at different time points.
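A sketch of the handover rule in steps C10-C20: two per-camera vehicle identifiers are merged when their reports at the same time point map to the same stitching coordinates. The report format and the coordinate tolerance are assumptions:

```python
# Illustrative reports: (camera_id, vehicle_id, time_point, (x, y)).

def merge_identifiers(reports, tol=0.05):
    """Return pairs of (camera_id, vehicle_id) keys judged to denote the
    same target vehicle because they reported (nearly) the same stitching
    coordinates at the same time point."""
    pairs = set()
    for a in reports:
        for b in reports:
            if (a[0], a[1]) < (b[0], b[1]) and a[2] == b[2]:
                if (abs(a[3][0] - b[3][0]) <= tol
                        and abs(a[3][1] - b[3][1]) <= tol):
                    pairs.add(((a[0], a[1]), (b[0], b[1])))
    return pairs

reports = [("cam_1", "A", 901, (4.40, 1.70)),
           ("cam_2", "B", 901, (4.41, 1.71))]
print(merge_identifiers(reports))
# -> {(('cam_1', 'A'), ('cam_2', 'B'))}: A and B are one target vehicle
```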
Illustratively, the plurality of cameras includes an entrance camera arranged at the entrance/exit of the designated area (such as a parking lot) and parking space cameras arranged within the designated area to monitor the parking spaces. Referring to the schematic view of a parking lot in FIG. 4, the lot has an entrance/exit at which an entrance camera is installed; if there are several entrances/exits, an entrance camera must be installed at each of them. The parking space cameras may be mounted on light towers around the lot; there are several of them, each collecting video images of a different sub-area of the lot, and each must be able to acquire clear images of its monitored area from its elevated mounting position.
In the above embodiment, the server assigns a unique identifier to the coordinate position information of the vehicle in the stitching coordinate system; this may be done after the target vehicle first enters the designated area. For example, after receiving the vehicle position information of a certain vehicle sent by the entrance camera, the server assigns a unique identifier, within the stitching coordinate system, to the coordinate position information corresponding to that vehicle position information.
In an embodiment of the present application, the vehicle information sent by each camera further includes vehicle event information, where the vehicle event information includes at least one of: entering the designated area, entering a designated position within the designated area, parking, stopping and starting at a designated position, leaving a designated position within the designated area, and leaving the designated area.
In this embodiment, the method further includes:
and acquiring each piece of vehicle event information of the target vehicle from the vehicle information transmitted by each camera.
And correspondingly storing the moving track and each piece of vehicle event information.
The event information includes: pictures and descriptive information.
For example, the parking space camera analyzes whether an event that the current vehicle enters a parking lot, parks in the parking lot, starts in the parking lot, leaves the parking lot or drives away from the parking lot occurs according to a moving track of the vehicle, which may be a moving track in a calibration coordinate system, and if so, a picture of the vehicle is captured during the event occurrence process, and description information of the event is generated, where the description information of the event may be: entering a parking lot, parking, etc. The picture and the description information are transmitted to a server. The server stores the event information locally so as to facilitate the user to view and evidence.
For example, the server may also process and store the above-mentioned pictures, such as stitching a close-up image of the vehicle with the pictures of the vehicle captured during the time to obtain a complete forensic image.
Optionally, after determining that the target vehicle has driven out of the parking lot, the server deletes all stored data of that target vehicle to release storage space.
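Server-side bookkeeping for tracks and events might look like the following sketch; the record fields and function names are assumptions, not the patent's design:

```python
# Illustrative per-vehicle record kept by the server, keyed by the unique
# identifier assigned when the vehicle first enters the designated area.

class VehicleRecord:
    def __init__(self, uid):
        self.uid = uid      # unique identifier in the stitching coordinate system
        self.track = []     # [(time_point, (x, y)), ...] - the movement track
        self.events = []    # [(description, picture), ...] - event information

records = {}

def on_position(uid, t, xy):
    records.setdefault(uid, VehicleRecord(uid)).track.append((t, xy))

def on_event(uid, description, picture):
    records.setdefault(uid, VehicleRecord(uid)).events.append((description, picture))

def on_left_parking_lot(uid):
    # Delete all stored data of the target to release storage space.
    records.pop(uid, None)

on_position("V1", 901, (4.4, 1.7))
on_event("V1", "entered parking lot", "snapshot.jpg")
on_left_parking_lot("V1")
```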
Referring to fig. 5, in an embodiment of the present application, there is further provided a vehicle tracking monitoring apparatus, including:
a receiving module 501, configured to receive vehicle information sent by each camera in a designated area, where the vehicle information sent by each camera at least includes: vehicle position information and first information for identifying the same vehicle;
a determining module 502, configured to determine, in a created stitching coordinate system, coordinate position information corresponding to each piece of vehicle position information;
an identifying module 503, configured to identify, according to the first information, target coordinate position information belonging to the same target vehicle from the coordinate position information, and to determine the movement track of the target vehicle in the designated area according to the target coordinate position information.
Optionally, the vehicle information sent by each camera further includes a camera identifier; the determining module 502 is specifically configured to:
for the vehicle information sent by each camera, determine the corresponding area range in the stitching coordinate system according to the camera identifier carried by the vehicle information;
and determine coordinate position information corresponding to the vehicle position information in the vehicle information according to the position information of the area range in the stitching coordinate system.
Optionally, the stitching coordinate system is composed of calibration coordinate systems of the cameras; the vehicle position information includes: calibration coordinates of the vehicle in a calibration coordinate system of the camera;
the determining module 502 is specifically configured to determine, according to the position information of the area range in the stitching coordinate system, coordinate position information corresponding to vehicle position information in the vehicle information by:
for the vehicle information sent by each camera, finding, among the locally stored calibration coordinate systems of the cameras, the target calibration coordinate system matching the camera identifier carried by the vehicle information, and determining the target area block where the target calibration coordinate system is located as the area range;
and determining coordinate position information corresponding to the vehicle position information in the stitching coordinate system according to the position information of the area range in the stitching coordinate system and the calibration coordinates of the vehicle.
Optionally, the first information includes: a license plate identifier corresponding to the vehicle position information; the identifying module 503 is specifically configured to:
determining the position information of each vehicle corresponding to the same license plate identifier;
and determining coordinate position information of each piece of vehicle position information in the splicing coordinate system as the target coordinate position information.
Optionally, the first information includes: the time point and the vehicle identification distributed to the vehicle by the camera; the identifying module 503 is specifically configured to:
and determining coordinate position information belonging to the same target vehicle from the coordinate position information corresponding to the vehicle position information sent by each camera according to the time point included in the vehicle information and the vehicle identification distributed to the vehicle by the camera.
Optionally, the identifying module 503 is specifically configured to:
for a first camera (any camera that sends vehicle information), obtain the vehicle position information that the first camera sent at different time points for the same vehicle identifier, and determine the coordinate position information corresponding to that vehicle position information as the target coordinate position information of one and the same target vehicle;
if the vehicle position information sent by the first camera and by the adjacent second camera at one or more identical time points corresponds to the same coordinate position information, and that coordinate position information is among the target coordinate position information already determined, determine, from the vehicle information sent by the second camera, the target vehicle identifier corresponding to that vehicle position information, determine the vehicle position information that the second camera sent at different time points for that target vehicle identifier, and determine the coordinate position information corresponding to each such piece of vehicle position information as target coordinate position information of the target vehicle.
Optionally, the vehicle information sent by each camera further includes vehicle event information, where the vehicle event information includes at least one of: event information of entering the designated area, event information of entering a designated position within the designated area, event information of parking, event information of leaving a designated position within the designated area, and event information of leaving the designated area;
the above-mentioned device still includes:
a storage module (not shown in the figure) for acquiring each piece of vehicle event information of the target vehicle from the vehicle information transmitted by each camera; and correspondingly storing the moving track and each piece of vehicle event information.
An embodiment of the present application further provides a vehicle tracking and monitoring system, comprising a server and a plurality of monitoring cameras, where the server performs vehicle tracking and monitoring by applying the vehicle tracking and monitoring method described above. The monitoring cameras are respectively used for collecting video images of different sub-areas of the designated area and identifying a target vehicle from the video images; determining the vehicle position information of the target vehicle according to the position of the target vehicle in the video image and the correspondence between the pixel coordinate system of the video image and the calibration coordinate system of the camera; and transmitting vehicle information including at least the vehicle position information to the server;
the server is used for receiving the vehicle information sent by the cameras arranged in the designated area, wherein the vehicle information sent by each camera at least comprises vehicle position information; determining, in a created stitching coordinate system, coordinate position information corresponding to each piece of vehicle position information; and identifying target coordinate position information belonging to the same target vehicle from the coordinate position information, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information.
The cameras comprise entrance cameras and parking space cameras. Each camera calculates the pixel coordinates of the vehicle in the image coordinate system, converts them into calibration coordinates in its calibration coordinate system, and uploads the calibration coordinates of the vehicle to the server in real time.
For example, FIG. 6 is a schematic diagram of a parking space camera's calibration coordinate system according to an exemplary embodiment of the present application. The calibration coordinate system is determined by a coordinate origin and calibration lines; the calibration lines are set by a worker when calibrating the camera. After a parking space camera is installed, region boundary lines are configured according to the actual distribution of parking spaces in the lot, and two adjacent region boundary lines delimit one lane. Referring to FIG. 6, the calibration lines of the parking space camera in this embodiment comprise two horizontal calibration lines and two vertical calibration lines, placed in the overlapping area of the images collected by adjacent cameras and kept horizontal or vertical. The two horizontal calibration lines are a first calibration line 401 and a second calibration line 404; the two vertical calibration lines are a third calibration line 403 and a fourth calibration line 402, where the third calibration line 403 coincides with the leftmost region boundary 405 in the figure. The calibration coordinate system of the parking space camera is then defined by taking the intersection of the third calibration line 403 and the second calibration line 404 as the origin and setting the coordinates of the intersection of the fourth calibration line 402 and the first calibration line 401 to (1, 1).
Optionally, before the abscissa x in the calibration coordinate system is calculated, region boundaries are arranged on the image in the horizontal direction: a worker, combining the monitoring picture with the numbers of parking space rows and aisles in the actual monitored area, divides the scene into n lanes (taking FIG. 6 as an example, counting from the left, the first parking space row, aisle 3, the second and third parking space rows, and aisle 2 are each divided into one lane, giving the 4 lanes shown in the figure). Taking the abscissa of the leftmost region boundary (which coincides with the third calibration line 403) as 0 and the abscissa of the rightmost region boundary (which coincides with the fourth calibration line 402) as 1, each lane has width 1/n. When the vehicle target is in lane i (with i counted from 0), the pixel distance between the center of the vehicle target and the left region boundary of its lane is a, and the pixel width of the lane is b, the abscissa x of the vehicle target in the calibration coordinate system is calculated by the following formula (1):
x = (i + a/b) / n    (1)
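Formula (1) can be checked with a short sketch; the parameter values below are made up for illustration:

```python
def abscissa(i, a, b, n):
    """Formula (1): abscissa x in the calibration coordinate system.
    i: lane index counted from 0; a: pixel distance from the target to
    the left region boundary of its lane; b: pixel width of that lane;
    n: number of lanes, each of width 1/n."""
    return (i + a / b) / n

# A target halfway across the third of four lanes (i = 2, a/b = 0.5):
print(abscissa(i=2, a=60, b=120, n=4))  # -> 0.625
```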
for the ordinate y of the vehicle object, the ratio of the vertical distance of the vehicle object to the first calibration line and the distance between the first calibration line and the second calibration line is defined.
First, according to the imaging principle, the distance D from the lens center to the target, the lens focal length f, the actual target width W and the imaged target width w satisfy the following relation (2):
W*f = D*w    (2)
FIG. 7 is an imaging cross-sectional view according to an exemplary embodiment of the present application, taking one camera as an example. OA is the mounting upright, point O is the position of the camera, point C is the projected point of the first calibration line in the cross-section, point D is the projected point of the lane width line of the lane containing the target point (the key point of the vehicle target), and point G is the projected point of the second calibration line in the cross-section. L1 is the perpendicular distance from the upright to the first calibration line, L is the perpendicular distance from the upright to point D, and L2 is the perpendicular distance from the upright to point G. When the projected point of the lane width line is at point C, point G or point D respectively, the following relation (3) holds:
W*f = OC*p = OG*q = OD*r    (3)
where p, q and r are the pixel widths of the corresponding lane in the image when the lane width line is at position C, G and D respectively; the lane width line is the straight line passing through the target point and perpendicular to the region boundaries of its lane.
In addition, substituting L2 = L1 + d = kd + d and L = L1 + CD = kd + yd into the above relation, the following formula (4) can be obtained:
y = m/r - k    (4)
where d denotes the perpendicular distance between the first calibration line and the second calibration line, k is the ratio of the perpendicular distance from the upright to the first calibration line to d (from relation (3), k = q/(p - q)), and the parameter m is given by the following formula (5):
m = k*p    (5)
Therefore, once the camera is erected and the calibration lines and region boundaries are set, the values of k, p and q (and hence m) are determined, and the ordinate y of the vehicle in the calibration coordinate system can then be calculated by acquiring, in real time, the pixel width r of the lane in which the target is located.
The camera can then convert the pixel coordinates of each point of the image to coordinates in the calibration coordinate system in the manner described above.
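Since the published bodies of formulas (4) and (5) appear only as figures, the sketch below follows the reconstruction given above (k = q/(p-q), m = k*p, y = m/r - k); treat it as an assumption-laden illustration rather than the patent's exact formulas:

```python
# Reconstruction-based sketch: k and m are fixed at calibration time from
# p and q; y is then computed from the real-time lane pixel width r.

def calibrate_vertical(p, q):
    """p, q: pixel widths of the lane at the first and second calibration
    lines. Returns (k, m) as reconstructed above (an assumption)."""
    k = q / (p - q)  # ratio of the upright-to-first-line distance to d
    m = k * p        # parameter m of formula (5), reconstructed as k*p
    return k, m

def ordinate(r, k, m):
    """Formula (4), reconstructed: y = m / r - k."""
    return m / r - k

k, m = calibrate_vertical(p=200.0, q=100.0)
print(ordinate(r=200.0, k=k, m=m))  # on the first calibration line -> 0.0
print(ordinate(r=100.0, k=k, m=m))  # on the second calibration line -> 1.0
```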
The calibration coordinate system of the entrance camera is calibrated and converted in a similar way, but its field of view may be markedly smaller than that of a parking space camera, covering only part of the area of some parking space camera's calibration coordinate system. Therefore, before the coordinates of the target vehicle in the stitching coordinate system are calculated, preprocessing is required to bring the entrance camera's calibration coordinates to the same scale as the parking space camera's calibration coordinates.
Illustratively, let the distance between the horizontal calibration lines of the entrance camera be h and the distance between its vertical calibration lines be w; let the distance between the horizontal calibration lines of a parking space camera be H and the distance between its vertical calibration lines be W, all parking space cameras using the same parameters (which must be set at installation). As shown in FIG. 8, the coordinate system xy corresponding to points C, B and D is the calibration coordinate system of the entrance camera, and the coordinate system XY is the calibration coordinate system of the parking space camera. Let the coordinates of point B in the entrance camera's system be (x, y), its coordinates in the parking space camera's system be (X, Y), and the coordinates of the origin C of the entrance camera's calibration coordinate system in the parking space camera's system be (p, q). Then X = p - y*h/W and Y = q - x*w/H, which gives the conversion relation between the entrance camera's calibration coordinates and the parking space camera's calibration coordinates; through this relation the two sets of calibration coordinates can be unified.
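A sketch of this scale unification, reading the relation above as X = p - y*h/W and Y = q - x*w/H; the axis pairing and signs depend on the camera orientations in FIG. 8 and are assumptions here:

```python
def entrance_to_space(x, y, p, q, h, w, H, W):
    """Map a point (x, y) in the entrance camera's calibration system to
    (X, Y) in the parking space camera's system, where (p, q) are the
    coordinates of the entrance system's origin C in the parking space
    camera's system. h, w (entrance) and H, W (parking space) are the
    horizontal/vertical calibration-line spacings."""
    X = p - y * h / W  # entrance y-axis maps, reversed, onto X
    Y = q - x * w / H  # entrance x-axis maps, reversed, onto Y
    return X, Y

# The entrance origin C itself must land on (p, q):
print(entrance_to_space(0.0, 0.0, p=0.3, q=0.6, h=4.0, w=6.0, H=12.0, W=18.0))
# -> (0.3, 0.6)
```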
It should be noted that the above is only an example; when the calibration coordinate systems of the entrance camera and the parking space camera differ from this example, the corresponding calculation formulas must be adjusted accordingly, which is not limited in this application.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the vehicle tracking monitoring method according to any of the embodiments described above.
Referring to FIG. 9, in another embodiment of the present application, a computer device is provided, including a memory 802, a processor 801, and a computer program stored on the memory 802 and executable on the processor 801; the memory 802 is connected to the processor 801 through a communication bus 803, and the processor 801, when executing the program, implements the steps of the vehicle tracking and monitoring method of any of the embodiments above.
The above device embodiments may be implemented by software, or may be implemented by hardware or a combination of hardware and software.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Moreover, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.
Claims (11)
1. A vehicle tracking and monitoring method, applied to a server in which a stitching coordinate system is stored in advance, the stitching coordinate system being composed of the calibration coordinate systems of respective cameras, the method comprising:
receiving vehicle information sent by cameras arranged in a designated area, wherein the vehicle information sent by each camera at least comprises: vehicle position information and first information for identifying the same vehicle; the vehicle position information includes: calibration coordinates of the vehicle in a calibration coordinate system of the camera;
determining coordinate position information corresponding to each piece of vehicle position information in the stitching coordinate system according to the calibration coordinates of the vehicle in the camera's calibration coordinate system and the relative position, in the stitching coordinate system, of the calibration coordinate system of the camera sending the vehicle information;
and identifying target coordinate position information belonging to the same target vehicle from the coordinate position information according to the first information, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information.
2. The method of claim 1, wherein the vehicle information sent by each camera further comprises: a camera identifier;
wherein the determining of the coordinate position information corresponding to the vehicle position information in the stitching coordinate system, according to the calibration coordinates of the vehicle in the camera's calibration coordinate system and the relative position in the stitching coordinate system of the calibration coordinate system of the camera sending the vehicle information, comprises:
for the vehicle information sent by each camera, determining the corresponding area range in the stitching coordinate system according to the camera identifier carried by the vehicle information;
and determining the coordinate position information corresponding to the vehicle position information in the vehicle information according to the calibration coordinates of the vehicle in the camera's calibration coordinate system and the position information of the area range in the stitching coordinate system.
3. The method of claim 2, wherein the splicing coordinate system being composed of the calibration coordinate systems of the cameras comprises: the calibration coordinate system of each camera corresponds to one area block, and the splicing coordinate system is formed by splicing the area blocks corresponding to the cameras;
wherein the determining of the coordinate position information corresponding to the vehicle position information according to the calibration coordinates of the vehicle in the calibration coordinate system of the camera and the position information of the area range in the splicing coordinate system comprises:
for the vehicle information sent by each camera, finding, among the locally stored calibration coordinate systems of the cameras, the target calibration coordinate system matching the camera identification carried in the vehicle information, and determining the target area block where the target calibration coordinate system is located as the area range;
and determining the coordinate position information corresponding to the vehicle position information in the splicing coordinate system according to the position information of the area range in the splicing coordinate system and the calibration coordinates of the vehicle.
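A minimal sketch of the area-block lookup in claims 2 and 3, assuming an axis-aligned block layout and hypothetical camera identifications:

```python
from typing import NamedTuple

class AreaBlock(NamedTuple):
    x0: float  # origin of the block in the splicing coordinate system
    y0: float

# Assumed registry keyed by camera identification; values are placeholders.
AREA_BLOCKS = {
    "cam-01": AreaBlock(0.0, 0.0),
    "cam-02": AreaBlock(40.0, 0.0),   # block spliced to the east
    "cam-03": AreaBlock(0.0, 25.0),   # block spliced to the north
}

def locate(camera_id: str, calib_x: float, calib_y: float):
    """Find the target area block for the camera identification, then offset
    the calibration coordinates by the block's position."""
    block = AREA_BLOCKS[camera_id]
    return (block.x0 + calib_x, block.y0 + calib_y)
```

Keying the registry by camera identification makes the per-message work a constant-time lookup plus an offset, which is why the claims separate the lookup step from the splice layout.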
4. The method of claim 1, wherein the first information comprises: a license plate identifier corresponding to the vehicle position information;
the identifying of the target coordinate position information belonging to the same target vehicle from the coordinate position information according to the first information comprises:
determining each piece of vehicle position information corresponding to the same license plate identifier;
and determining the coordinate position information of each such piece of vehicle position information in the splicing coordinate system as the target coordinate position information.
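When the first information is a license plate identifier, the identification step of claim 4 reduces to keying the converted coordinates by plate; a sketch under assumed data shapes:

```python
from collections import defaultdict

def target_coords_by_plate(converted):
    """converted: iterable of (license_plate_id, coord_position) pairs, where
    coord_position is already expressed in the splicing coordinate system.
    Returns plate -> list of target coordinate position information."""
    grouped = defaultdict(list)
    for plate, coord in converted:
        grouped[plate].append(coord)
    return dict(grouped)
```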
5. The method of claim 1, wherein the first information comprises: a time point and a vehicle identifier assigned to the vehicle by the camera;
the identifying of the target coordinate position information belonging to the same target vehicle from the coordinate position information according to the first information comprises:
determining coordinate position information belonging to the same target vehicle from the coordinate position information corresponding to the vehicle position information sent by each camera, according to the time point included in the vehicle information and the vehicle identifier assigned to the vehicle by the camera.
6. The method according to claim 5, wherein the determining of coordinate position information belonging to the same target vehicle from the coordinate position information corresponding to the vehicle position information sent by each camera, according to the time point included in the vehicle information and the vehicle identifier assigned to the vehicle by the camera, comprises:
for a first camera, the first camera being any camera that sends vehicle information, obtaining the vehicle position information sent by the first camera at different time points that corresponds to the same vehicle identifier, and determining the coordinate position information corresponding to that vehicle position information as the target coordinate position information of the same target vehicle;
and if the vehicle position information sent by the first camera and by an adjacent second camera at one or more identical time points corresponds to the same coordinate position information, and that coordinate position information is one of the determined target coordinate position information, determining the target vehicle identifier corresponding to that vehicle position information from the vehicle information sent by the second camera, determining the vehicle position information corresponding to the target vehicle identifier sent by the second camera at different time points, and determining the coordinate position information corresponding to each piece of vehicle position information so determined as the target coordinate position information of the target vehicle.
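A hedged sketch of the claim-6 handoff: per-camera vehicle identifiers are linked when the first camera and an adjacent second camera report the same coordinate position at the same time point. The matching tolerance, the record shapes, and the assumption of shared discretised time stamps are all illustrative choices, not specified by the claim.

```python
EPS = 0.5  # metres; assumed tolerance for "the same coordinate position"

def same_position(p, q, eps=EPS):
    """True when two splicing-coordinate positions coincide within eps."""
    return abs(p[0] - q[0]) <= eps and abs(p[1] - q[1]) <= eps

def link_handoff(target_track, neighbour_obs):
    """target_track: [(t, xy), ...] already attributed to the target vehicle
    via the first camera's vehicle identifier.
    neighbour_obs: [(t, xy, local_id), ...] from the adjacent second camera.
    Returns the second camera's local identifier for the same target vehicle,
    or None when no time-aligned position match exists."""
    by_time = dict(target_track)  # time point -> coordinate position
    for t, xy, local_id in neighbour_obs:
        if t in by_time and same_position(by_time[t], xy):
            return local_id
    return None
```

Once the second camera's local identifier is linked, its observations at other time points extend the target track, which is how a track crosses camera boundaries without any global identifier being shared between cameras.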
7. The method of claim 1, wherein the vehicle information sent by each camera further comprises vehicle event information, the vehicle event information comprising at least one of: event information of entering the designated area, event information of entering a designated position in the designated area, event information of parking, event information of leaving the designated position in the designated area, and event information of leaving the designated area;
the method further comprising:
acquiring the vehicle event information of the target vehicle from the vehicle information sent by the cameras;
and storing the movement track in correspondence with each piece of vehicle event information.
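An illustrative record pairing a movement track with its vehicle event information, as claim 7 stores them in correspondence; the field names and event labels are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class TrackRecord:
    vehicle_key: str                             # plate or assigned identifier
    track: list = field(default_factory=list)    # [(t, (x, y)), ...]
    events: list = field(default_factory=list)   # [(t, "enter_area"), ...]

rec = TrackRecord("plate-123")
rec.track.append((12.0, (3.2, 7.5)))
rec.events.append((12.0, "enter_area"))
```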
8. A vehicle tracking monitoring device, comprising:
the receiving module is used for receiving the vehicle information sent by the cameras arranged in the designated area, wherein the vehicle information sent by each camera comprises at least: vehicle position information and first information for identifying the same vehicle, the vehicle position information comprising the calibration coordinates of the vehicle in the calibration coordinate system of the camera;
the determining module is used for determining the coordinate position information corresponding to the vehicle position information in a splicing coordinate system according to the calibration coordinates of the vehicle in the calibration coordinate system of the camera and the relative position, in the splicing coordinate system, of the calibration coordinate system of the camera that sent the vehicle information, the splicing coordinate system being composed of the calibration coordinate systems of the cameras;
and the identification module is used for identifying target coordinate position information belonging to the same target vehicle from the coordinate position information according to the first information, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information.
9. A vehicle monitoring system, characterized in that the system comprises a server and a plurality of monitoring cameras and performs vehicle tracking and monitoring by applying the method according to any one of claims 1-6;
the monitoring cameras are respectively used for acquiring video images of different areas of a designated area and identifying a target vehicle from the video images; determining the vehicle position information of the target vehicle from the position of the target vehicle in the video image, based on the correspondence between the pixel coordinate system of the video image and the calibration coordinate system of the camera; and sending vehicle information comprising at least the vehicle position information to the server;
the server is pre-stored with a splicing coordinate system composed of the calibration coordinate systems of the cameras, and is used for receiving the vehicle information sent by the cameras in the designated area, wherein the vehicle information sent by each camera comprises at least the vehicle position information, the vehicle position information comprising the calibration coordinates of the vehicle in the calibration coordinate system of the camera; determining the coordinate position information corresponding to each piece of vehicle position information in the splicing coordinate system according to the calibration coordinates of the vehicle in the calibration coordinate system of the camera and the relative position, in the splicing coordinate system, of the calibration coordinate system of the camera that sent the vehicle information; and identifying target coordinate position information belonging to the same target vehicle from the coordinate position information, and determining the movement track of the target vehicle in the designated area according to the target coordinate position information.
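The camera-side correspondence in claim 9 between the pixel coordinate system and the calibration coordinate system is commonly realised as a planar homography; a sketch with a placeholder matrix (a real matrix would come from offline calibration against four or more marked ground points):

```python
import numpy as np

# Assumed 3x3 homography from pixel coordinates to ground-plane calibration
# coordinates; the values below are placeholders, not calibration output.
H = np.array([[0.02, 0.0, -5.0],
              [0.0, 0.03, -4.0],
              [0.0, 0.0, 1.0]])

def pixel_to_calibration(u, v, H=H):
    """Apply the homography to a pixel position and dehomogenise."""
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)

# e.g. the bottom-centre of the vehicle's bounding box as its ground point:
print(pixel_to_calibration(640.0, 360.0))
```

Using the bottom-centre of the bounding box as the ground contact point is a common choice, since a homography is only valid for points on the calibrated ground plane.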
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1-7 are implemented when the program is executed by the processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910412661.8A | 2019-05-17 | 2019-05-17 | Vehicle tracking monitoring method, device and system
Publications (2)
Publication Number | Publication Date
---|---
CN111951598A | 2020-11-17
CN111951598B | 2022-04-26
Family
ID=73336149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201910412661.8A (granted as CN111951598B, Active) | Vehicle tracking monitoring method, device and system | 2019-05-17 | 2019-05-17
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112991734B (en) * | 2021-03-02 | 2022-09-02 | 英博超算(南京)科技有限公司 | Parking space state detection system of visual parking space |
CN113873200B (en) * | 2021-09-26 | 2024-02-02 | 珠海研果科技有限公司 | Image identification method and system |
CN116091899B (en) * | 2023-04-12 | 2023-06-23 | 中国铁塔股份有限公司 | Vehicle tracking method, system, device, electronic equipment and readable storage medium |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
MY147105A (en) * | 2003-09-03 | 2012-10-31 | Stratech Systems Ltd | Apparatus and method for locating, identifying and tracking vehicles in a parking area |
CN101277429B (en) * | 2007-03-27 | 2011-09-07 | 中国科学院自动化研究所 | Method and system for amalgamation process and display of multipath video information when monitoring |
KR101516850B1 (en) * | 2008-12-10 | 2015-05-04 | 뮤비 테크놀로지스 피티이 엘티디. | Creating a new video production by intercutting between multiple video clips |
CN102915638A (en) * | 2012-10-07 | 2013-02-06 | 复旦大学 | Surveillance video-based intelligent parking lot management system |
CN103903246A (en) * | 2012-12-26 | 2014-07-02 | 株式会社理光 | Object detection method and device |
CN104182747A (en) * | 2013-05-28 | 2014-12-03 | 株式会社理光 | Object detection and tracking method and device based on multiple stereo cameras |
CN104902246B (en) * | 2015-06-17 | 2020-07-28 | 浙江大华技术股份有限公司 | Video monitoring method and device |
CN104954747B (en) * | 2015-06-17 | 2020-07-07 | 浙江大华技术股份有限公司 | Video monitoring method and device |
KR20170000110A (en) * | 2015-06-23 | 2017-01-02 | (주)모가씨앤디 | System and method for parking management by tracking position of vehicle |
CN105847751A (en) * | 2016-04-14 | 2016-08-10 | 清华大学 | Map based global monitoring method and apparatus |
CN106652063A (en) * | 2016-12-20 | 2017-05-10 | 北京速通科技有限公司 | Free-flow electronic charging method and system for bidirectional lane |
US10592771B2 (en) * | 2016-12-30 | 2020-03-17 | Accenture Global Solutions Limited | Multi-camera object tracking |
CN107529665A (en) * | 2017-07-06 | 2018-01-02 | 新华三技术有限公司 | Car tracing method and device |
CN107134172A (en) * | 2017-07-12 | 2017-09-05 | 嘉兴寰知科技有限公司 | Parking monitors identifying system and its method |
CN107767673B (en) * | 2017-11-16 | 2019-09-27 | 智慧互通科技有限公司 | A kind of Roadside Parking management method based on multiple-camera, apparatus and system |
CN108765943A (en) * | 2018-05-30 | 2018-11-06 | 深圳市城市公共安全技术研究院有限公司 | Intelligent vehicle monitoring method, monitoring system and server |
CN108830251A (en) * | 2018-06-25 | 2018-11-16 | 北京旷视科技有限公司 | Information correlation method, device and system |
CN108876821B (en) * | 2018-07-05 | 2019-06-07 | 北京云视万维科技有限公司 | Across camera lens multi-object tracking method and system |
2019-05-17: Application CN201910412661.8A filed in China; granted as patent CN111951598B (status: Active).
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1790945A1 (en) * | 2004-09-15 | 2007-05-30 | Matsushita Electric Industrial Co., Ltd. | Route guidance device |
CN101950426A (en) * | 2010-09-29 | 2011-01-19 | 北京航空航天大学 | Vehicle relay tracking method in multi-camera scene |
CN102230798A (en) * | 2011-04-12 | 2011-11-02 | 清华大学 | Portable quick staff-free investigation system of traffic accident scene based on binocular vision |
CN202339636U (en) * | 2011-11-01 | 2012-07-18 | 杭州海康威视系统技术有限公司 | Image acquiring device |
WO2013155735A1 (en) * | 2012-04-16 | 2013-10-24 | Li Bo | Off-screen touch control interaction system having projection point coordinate indication of detected proximity |
CN103697864A (en) * | 2013-12-27 | 2014-04-02 | 武汉大学 | Narrow-view-field double-camera image fusion method based on large virtual camera |
CN103826103A (en) * | 2014-02-27 | 2014-05-28 | 浙江宇视科技有限公司 | Cruise control method for tripod head video camera |
CN104634246A (en) * | 2015-02-03 | 2015-05-20 | 李安澜 | Floating type stereo visual measuring system and measuring method for coordinates of object space |
CN106485736A (en) * | 2016-10-27 | 2017-03-08 | 深圳市道通智能航空技术有限公司 | A kind of unmanned plane panoramic vision tracking, unmanned plane and control terminal |
CN106846870A (en) * | 2017-02-23 | 2017-06-13 | 重庆邮电大学 | The intelligent parking system and method for the parking lot vehicle collaboration based on centralized vision |
CN108875458A (en) * | 2017-05-15 | 2018-11-23 | 杭州海康威视数字技术股份有限公司 | Detection method, device, electronic equipment and the video camera that vehicular high beam lamp is opened |
CN107507298A (en) * | 2017-08-11 | 2017-12-22 | 南京阿尔特交通科技有限公司 | A kind of multimachine digital video vehicle operation data acquisition method and device |
CN109525790A (en) * | 2017-09-20 | 2019-03-26 | 杭州海康威视数字技术股份有限公司 | Video file generation method and system, playback method and device |
CN109598674A (en) * | 2017-09-30 | 2019-04-09 | 杭州海康威视数字技术股份有限公司 | A kind of image split-joint method and device |
CN108288386A (en) * | 2018-01-29 | 2018-07-17 | 深圳信路通智能技术有限公司 | Road-surface concrete tracking based on video |
CN109099883A (en) * | 2018-06-15 | 2018-12-28 | 哈尔滨工业大学 | The big visual field machine vision metrology of high-precision and caliberating device and method |
CN109141432A (en) * | 2018-09-19 | 2019-01-04 | 西安科技大学 | A kind of indoor positioning air navigation aid assisted based on image space and panorama |
Non-Patent Citations (5)
Title |
---|
High-Precision Camera Localization in Scenes with Repetitive Patterns; Liu, Xiaobai et al.; ACM Transactions on Intelligent Systems and Technology; 2018-11-30; Vol. 9, No. 6; full text *
Remote Monitoring of the Calibration of a System of Tracking Arrays; R. Read; IEEE Journal of Oceanic Engineering; 1987-12-31; Vol. 12, No. 1; full text *
Visualization of cross-field-of-view multi-target tracking in traffic intersection surveillance video (交通路口监控视频跨视域多目标跟踪的可视化); Liu Caihong; Chinese Journal of Computers (计算机学报); 2018-01-31; Vol. 41, No. 1; pp. 221-235 *
Large-field-of-view stitching technology in video surveillance systems (视频监控系统中的大视场拼接技术); Feng Hao et al.; Proceedings of the 14th National Conference on Image and Graphics (第十四届全国图象图形学学术会议论文集); 2008-05-01; pp. 548-552 *
A 3D data stitching method using optical positioning and tracking technology (采用光学定位跟踪技术的三维数据拼接方法); Han Jiandong; Optics and Precision Engineering (光学精密工程); 2009-01-15; Vol. 17, No. 1; pp. 45-51 *
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant