CN111914592B - Multi-camera combined evidence obtaining method, device and system - Google Patents
- Publication number
- CN111914592B (application number CN201910380772.5A)
- Authority
- CN
- China
- Prior art keywords
- monitoring target
- target
- close
- camera
- appointed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The embodiments of the application provide a multi-camera combined evidence obtaining method, device and system, wherein the method comprises the following steps: acquiring a panoramic video stream collected by a gun camera, and judging whether a monitoring target in the panoramic video stream triggers a preset detection event; if a monitoring target triggers the preset detection event, acquiring, from the panoramic video stream, a video frame in which the designated monitoring target triggers the preset detection event, to obtain a panoramic evidence image, the designated monitoring target being the monitoring target that triggers the preset detection event; predicting the predicted position of the designated monitoring target in the gun-camera image after a preset period; determining the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the gun-camera image and the PT coordinates of the dome camera; and acquiring a close-up evidence image at the target PT coordinates through the dome camera. With this multi-camera combined evidence obtaining method, a close-up evidence image can be obtained at the same time as the panoramic evidence image.
Description
Technical Field
The application relates to the technical field of video monitoring, in particular to a multi-camera combined evidence obtaining method, device and system.
Background
With the rapid development of intelligent traffic, evidence collection based on video monitoring technology is widely applied in traffic scenes. At crossroads, curves and accident-prone zones in particular, a monitoring camera acts as a deterrent and facilitates evidence collection after an incident.
In the related art, a gun camera is arranged at a designated monitoring point to comprehensively monitor the scene. However, because the gun camera has a large viewing angle, local details of the gun-camera image are easily distorted when magnified, so that the details of a violation are not captured adequately, and misjudgment or insufficient evidence may result. It is therefore desirable to acquire a close-up evidence image.
Disclosure of Invention
The embodiments of the application aim to provide a multi-camera combined evidence obtaining method, device and system, so as to obtain a close-up evidence image while obtaining a panoramic evidence image. The specific technical solutions are as follows:
in a first aspect, an embodiment of the present application provides a multi-camera joint evidence obtaining method, where the method includes:
acquiring a panoramic video stream collected by a gun camera, and judging whether a monitoring target in the panoramic video stream triggers a preset detection event;
if a monitoring target triggers the preset detection event, acquiring, from the panoramic video stream, a video frame in which a designated monitoring target triggers the preset detection event, to obtain a panoramic evidence image, wherein the designated monitoring target is a monitoring target that triggers the preset detection event;
predicting the predicted position of the designated monitoring target in the gun-camera image after a preset period;
determining the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the gun-camera image and the PT coordinates of the dome camera;
and acquiring a close-up evidence image at the target PT coordinates through the dome camera.
In a second aspect, an embodiment of the present application provides a multi-camera combined evidence obtaining device, including:
the video stream detection module is used for acquiring a panoramic video stream acquired by a gun camera and judging whether a monitoring target in the panoramic video stream triggers a preset detection event;
the panoramic evidence acquisition module is used for acquiring, from the panoramic video stream, a video frame in which a designated monitoring target triggers the preset detection event if a monitoring target triggers the preset detection event, so as to obtain a panoramic evidence image, wherein the designated monitoring target is a monitoring target that triggers the preset detection event;
the position prediction module is used for predicting the predicted position of the designated monitoring target in the gun-camera image after a preset period;
the target coordinate determining module is used for determining the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the gun-camera image and the PT coordinates of the dome camera;
and the close-up evidence acquisition module is used for acquiring a close-up evidence image at the target PT coordinates through the dome camera.
In a third aspect, an embodiment of the present application provides a multi-camera combined forensic system, where the system includes: a gun camera and a dome camera; when in operation, the gun camera implements the multi-camera combined evidence obtaining method of any one of the first aspects.
In a fourth aspect, an embodiment of the present application provides a multi-camera combined evidence obtaining system, including: a server, a gun camera and a dome camera; when running, the server implements the multi-camera combined evidence obtaining method of any one of the above first aspects.
According to the multi-camera combined evidence obtaining method, device and system provided by the embodiments of the application, a panoramic video stream collected by a gun camera is acquired, and whether a monitoring target in the panoramic video stream triggers a preset detection event is judged; if a monitoring target triggers the preset detection event, a video frame in which the designated monitoring target triggers the preset detection event is acquired from the panoramic video stream to obtain a panoramic evidence image, the designated monitoring target being the monitoring target that triggers the preset detection event; the predicted position of the designated monitoring target in the gun-camera image after a preset period is predicted; the target PT coordinates of the dome camera corresponding to the predicted position are determined according to a pre-established association between each position in the gun-camera image and the PT coordinates of the dome camera; and a close-up evidence image at the target PT coordinates is acquired through the dome camera. A close-up evidence image can thus be obtained at the same time as the panoramic evidence image, and predicting the target's position after the preset period allows for the linkage time of the dome camera and improves the success rate of close-up evidence image acquisition. Of course, it is not necessary for any product or method practicing the application to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first schematic diagram of a multi-camera combined evidence obtaining method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a horizontal direction conversion of coordinate conversion according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a coordinate transformation in a vertical direction according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of a multi-camera combined evidence obtaining method according to an embodiment of the present application;
FIG. 5 is a schematic representation of a composite evidence image according to an embodiment of the present application;
FIG. 6 is a first schematic diagram of a multiple camera combined forensic system according to an embodiment of the present application;
FIG. 7 is a second schematic diagram of a multiple camera combined forensic system according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a multi-camera combined evidence obtaining apparatus according to an embodiment of the present application;
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In order to obtain a close-up evidence image of a violation event, an embodiment of the present application provides a multi-camera combined evidence obtaining method, referring to fig. 1, including:
S101, acquiring a panoramic video stream collected by a gun camera, and judging whether a monitoring target in the panoramic video stream triggers a preset detection event.
The multi-camera combined evidence obtaining method of the embodiment of the application can be implemented by an electronic device comprising a processor and a memory, wherein the memory is used for storing a computer program, and the method is implemented when the processor executes the computer program stored in the memory. Specifically, the electronic device may be a server, a hard disk video recorder, a gun camera, or the like.
The electronic device acquires the panoramic video stream collected by the gun camera in real time, and analyzes the panoramic video stream using computer vision techniques (such as a pre-trained convolutional neural network) to judge whether a monitoring target triggers a preset detection event. The monitoring target and the preset detection event can be set according to actual conditions; for example, the monitoring target is a pedestrian and the preset detection event is running a red light, jaywalking, etc.
Optionally, the panoramic video stream includes lane line information and lane direction information, the monitoring target is a vehicle, and the preset detection event includes one or more of: illegal stopping, reversing, lane-line pressing, U-turning, a motor vehicle occupying a non-motor-vehicle lane, and illegal lane changing. The electronic device can analyze the panoramic video stream through a pre-trained convolutional neural network to judge whether a vehicle in the panoramic video stream triggers a preset detection event.
S102, if a monitoring target triggers the preset detection event, acquiring, from the panoramic video stream, a video frame in which a designated monitoring target triggers the preset detection event, to obtain a panoramic evidence image, wherein the designated monitoring target is a monitoring target that triggers the preset detection event.
If it is detected that a monitoring target in the panoramic video stream triggers the preset detection event, then for each monitoring target that triggers it (hereinafter the designated monitoring target), the video frames in which the designated monitoring target triggers the event are extracted from the panoramic video stream. In general, multiple continuous video frames are extracted; all of them can be used as panoramic evidence images, or a specified number of key frames can be extracted from them to serve as panoramic evidence images.
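As one illustration of the key-frame option above, evenly spaced sampling could be sketched as follows (the function name and the even-spacing policy are assumptions for illustration; the patent does not fix a selection strategy):

```python
def select_key_frames(frames, n):
    """Pick n key frames spread evenly across the extracted violation frames."""
    if n >= len(frames):
        return list(frames)          # fewer frames than requested: keep them all
    if n == 1:
        return [frames[0]]
    step = (len(frames) - 1) / (n - 1)
    # round each fractional index to the nearest frame
    return [frames[int(i * step + 0.5)] for i in range(n)]
```

Any other policy (e.g. keeping the frames with the clearest view of the target) would fit the same step.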
S103, predicting the predicted position of the designated monitoring target in the gun-camera image after the preset period.
According to the movement of the designated monitoring target in the panoramic video stream, the electronic device predicts, through deep learning or a trajectory prediction algorithm, the position of the designated monitoring target after a preset period in the image of the gun camera that collects the panoramic video stream. The preset period is set according to the linkage time of the dome camera in actual conditions: it is greater than the linkage time, so that the dome camera has enough time to turn to the designated monitoring area, but it should not be set too long, so as not to reduce the accuracy of the predicted position.
Optionally, the predicting the predicted position of the designated monitoring target in the gun-camera image after the preset period includes:
S1031, determining the running parameters of the designated monitoring target according to the panoramic video stream, wherein the running parameters include the motion trail, speed and acceleration.
The running parameters of the designated monitoring target, comprising its motion trail, speed and acceleration, are calculated from its movement in the panoramic video stream.
S1032, predicting the predicted position of the designated monitoring target in the gun-camera image after the preset period according to its running parameters.
For example, when the designated monitoring target is a vehicle, the position the vehicle will reach in the gun-camera picture after the preset period, i.e. the predicted position, is predicted from the vehicle's motion trail, speed and acceleration, combined with the lane information, lane direction, etc.
Optionally, the predicting the predicted position of the designated monitoring target in the gun-camera image after the preset period according to its running parameters includes:
S10321, judging from the motion trail of the designated monitoring target: if it is travelling straight, predicting its position in the straight-line direction after the preset period from its speed and acceleration;
S10322, judging from the motion trail of the designated monitoring target: if it performs a lane-change action, predicting its position in the lane after the lane change after the preset period from its speed and acceleration;
S10323, judging from the speed and acceleration of the designated monitoring target: if it will stop within a preset first distance threshold, taking the position a preset second distance ahead of its current position along its motion trail as the predicted position.
The prediction logic can be summarized as follows: if the vehicle keeps advancing in a certain direction, after the preset period it will be at a point further along the same direction; if a lane-change action is detected, the vehicle's position in the lane after the lane change is predicted; if the vehicle speed indicates it is about to stop, the predicted position is set a short distance ahead of, or fixed near, its current position for active capture.
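The three-branch prediction logic above can be sketched as follows, under the simplifying assumptions of pixel-space positions, a constant-acceleration displacement model, and illustrative names and thresholds (none of which are specified by the patent):

```python
def heading(track):
    """Unit direction vector of the most recent motion along the track."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    dx, dy = x1 - x0, y1 - y0
    n = (dx * dx + dy * dy) ** 0.5 or 1.0
    return dx / n, dy / n

def predict_position(track, speed, accel, dt,
                     stop_speed_threshold=0.5, lead_distance=30.0):
    """Predict the target's position after dt seconds in gun-camera image space."""
    x, y = track[-1]
    # constant-acceleration displacement: s = v*dt + a*dt^2 / 2
    s = speed * dt + 0.5 * accel * dt * dt
    if speed + accel * dt <= stop_speed_threshold:
        # target is about to stop: capture a fixed distance ahead of it (S10323)
        s = lead_distance
    dx, dy = heading(track)  # straight travel or post-lane-change direction
    return (x + dx * s, y + dy * s)
```

A lane-line correction (S10323's companion step below) would then clamp the returned point into the lane area.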
Meanwhile, the predicted position can be corrected in combination with the lane lines: the vehicle's position cannot exceed the lane area. Optionally, the multi-camera combined evidence obtaining method of the embodiment of the present application further includes: correcting the predicted position according to the lane line information.
S104, determining the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the gun-camera image and the PT coordinates of the dome camera.
The dome camera usually uses a PTZ (Pan/Tilt/Zoom) coordinate system: the pan/tilt head moves left-right and up-down, and the lens zooms in and out. The association between each position in the gun-camera image and the PT coordinates of the dome camera can be pre-established by key-point mapping. For example, the positions of several target points in the gun-camera image are associated with the corresponding PT coordinates of the dome camera, and from these associated points the association between any position in the gun-camera image and the PT coordinates of the dome camera can be calculated. One gun camera can also be calibrated with multiple dome cameras, establishing an association between positions in the gun-camera image and the PT coordinates of each dome camera.
Optionally, the step of pre-establishing an association relationship between each position in the camera image and the PT coordinate of the ball camera includes:
step one, acquiring GPS (Global Positioning System ) coordinates of the set position of the ball machine frame and the erection height of the ball machine.
Step two, for any position in the gun-camera image, determining the GPS coordinates of the corresponding actual position in the real scene, and calculating the longitudinal and latitudinal distances between the dome camera and the actual position from the GPS coordinates of both.
The GPS coordinates comprise longitude and latitude: the longitude difference between the actual position and the dome camera gives the longitudinal distance, and the latitude difference gives the latitudinal distance.
Step three, calculating the horizontal distance between the actual position and the dome camera from the longitudinal and latitudinal distances.
The horizontal distance is the distance between the dome camera and the monitoring target assuming both are at the same height. Referring to fig. 2, in one case the ground may be treated as a plane, and the horizontal distance between the monitoring target and the dome camera is calculated with formula 1:
L = sqrt((longitudinal-direction distance)² + (latitudinal-direction distance)²)  (formula 1)
Alternatively, a haversine function may be used to calculate the horizontal distance between the monitoring target and the dome camera, see formula 2:
L = 2R·arcsin(sqrt(sin²((Aw − Bw)/2) + cos(Aw)·cos(Bw)·sin²((Aj − Bj)/2)))  (formula 2)
wherein Aw denotes the latitude of the monitoring target, Aj the longitude of the monitoring target, Bw the latitude of the dome camera, Bj the longitude of the dome camera, L the horizontal distance between the monitoring target and the dome camera, and R the earth radius at the location of the dome camera.
Alternatively, the ground may be treated as a sphere and the horizontal distance, i.e. the great-circle distance, between the monitoring target and the dome camera calculated with the spherical sine and cosine formulas. There are various ways of calculating this horizontal distance; they are not enumerated here.
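The planar approximation and the haversine calculation described above can be sketched as follows; the mean earth radius and the function names are illustrative assumptions:

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean earth radius; the patent uses the local radius R

def planar_distance(aw, aj, bw, bj):
    """Planar approximation: convert the latitude/longitude differences
    to metres and take the Euclidean distance."""
    d_lat = math.radians(bw - aw) * EARTH_RADIUS_M
    d_lon = math.radians(bj - aj) * EARTH_RADIUS_M * math.cos(math.radians((aw + bw) / 2))
    return math.hypot(d_lat, d_lon)

def haversine_distance(aw, aj, bw, bj, radius=EARTH_RADIUS_M):
    """Haversine: great-circle distance L between the target (Aw, Aj)
    and the dome camera (Bw, Bj); inputs in degrees, result in metres."""
    aw, aj, bw, bj = map(math.radians, (aw, aj, bw, bj))
    h = (math.sin((bw - aw) / 2) ** 2
         + math.cos(aw) * math.cos(bw) * math.sin((bj - aj) / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(h))
```

For the short camera-to-target distances in a traffic scene the two agree to well under a metre; the haversine form avoids the planar error over longer baselines.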
Step four, calculating the horizontal angle between the actual position and a designated direction via trigonometric functions, from the longitudinal and latitudinal distances.
The designated direction can be set according to the actual situation. Optionally, the designated direction is due north; calculating the horizontal angle between the actual position and the designated direction from the longitudinal and latitudinal distances comprises: taking the ratio of the longitudinal distance to the latitudinal distance as the tangent of the horizontal angle, and solving for the horizontal angle from this tangent. Referring to fig. 2, tan θ = (longitudinal-direction distance) / (latitudinal-direction distance), where θ is the horizontal angle between the monitoring target and due north.
Optionally, the designated direction is due east; calculating the horizontal angle between the actual position and the designated direction from the longitudinal and latitudinal distances comprises: taking the ratio of the latitudinal distance to the longitudinal distance as the tangent of the horizontal angle, and solving for the horizontal angle from this tangent. Referring to fig. 2, tan α = (latitudinal-direction distance) / (longitudinal-direction distance), where α is the horizontal angle between the monitoring target and due east.
Of course, the designated direction may also be due south, due west, etc.; the specific calculation is similar and is not repeated.
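The horizontal-angle calculation above can be sketched as follows; using atan2 rather than the bare tangent ratio handles all four quadrants, which the ratio alone does not (the function name is assumed):

```python
import math

def p_coordinate_from_north(d_lon_m, d_lat_m):
    """Horizontal angle (degrees clockwise from due north) to the target,
    given its east-west (d_lon_m) and north-south (d_lat_m) offsets in metres.
    atan2 extends tan(theta) = d_lon / d_lat to all four quadrants."""
    return math.degrees(math.atan2(d_lon_m, d_lat_m)) % 360.0
```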
Step five, determining the P coordinate of the dome camera from the horizontal angle.
The P coordinate of the dome camera can be understood as its angle in the horizontal direction. Knowing the horizontal angle between the target and the designated direction (such as due north), the angle the dome camera must face in the horizontal direction can be determined, giving the P coordinate of the dome camera.
Step six, calculating the T coordinate of the dome camera from the horizontal distance and the mounting height of the dome camera, thereby obtaining the association between any position in the gun-camera image and the PT coordinates of the dome camera.
Optionally, calculating the T coordinate of the dome camera from the horizontal distance and the height of the dome camera comprises: taking the ratio of the horizontal distance to the height of the dome camera as the tangent of the T coordinate, and solving for the T coordinate from this tangent. Referring to fig. 3, tan T = L / h, where h denotes the height of the dome camera, L the horizontal distance between the monitoring target and the dome camera, and T the T coordinate of the dome camera. The T coordinate of the dome camera can thus be calculated from this formula.
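The relation tan T = L / h can be sketched as (function name assumed):

```python
import math

def t_coordinate(horizontal_dist_m, camera_height_m):
    """T coordinate of the dome camera: tan(T) = L / h, so T = atan(L / h), in degrees."""
    return math.degrees(math.atan2(horizontal_dist_m, camera_height_m))
```

A target directly below the camera (L = 0) gives T = 0, i.e. the camera looks straight down; larger horizontal distances tilt the view toward the horizon.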
In practice there may be errors due to GPS accuracy, measurement accuracy and the like. Optionally, the step of pre-establishing the association between each position in the gun-camera image and the PT coordinates of the dome camera further includes: if there is a horizontal error between the actual position's converted coordinates in the image coordinate system and its coordinates in the image, adjusting the electronic compass of the dome camera to reduce the horizontal error; and if there is a vertical error between them, adjusting the recorded height value of the dome camera to reduce the vertical error.
S105, acquiring a close-up evidence image at the target PT coordinates through the dome camera.
The dome camera in the embodiment of the application may be a conventional dome camera with only an image acquisition function, or an intelligent dome camera with image feature extraction and analysis capabilities.
When the dome camera is a conventional dome camera, optionally, acquiring the close-up evidence image at the target PT coordinates through the dome camera includes:
Step one, adjusting the dome camera to the target PT coordinate position, and acquiring a close-up video stream at that position through the dome camera.
The electronic device sends a message containing the target PT coordinates to the dome camera, so that the dome camera, after receiving the message, turns to the target PT coordinates and collects a close-up video stream at that position; the electronic device then obtains this close-up video stream.
And step two, analyzing the close-up video stream to obtain a close-up evidence image of the monitoring target in the close-up video stream.
The electronic device analyzes the close-up video stream using computer vision techniques to obtain a close-up evidence image of the close-up video stream that includes the monitored target.
When the dome camera is an intelligent dome camera, optionally, acquiring the close-up evidence image at the target PT coordinates through the dome camera includes:
Step one, adjusting the dome camera to the target PT coordinate position, and triggering the dome camera to start a snapshot mode, so that the dome camera captures a close-up evidence image including the monitoring target.
The electronic device sends a message containing the target PT coordinates to the dome camera, so that the dome camera, after receiving the message, turns to the target PT coordinates, collects a close-up video stream there, and analyzes the close-up video stream using computer vision techniques to obtain a close-up evidence image including the monitoring target.
And step two, receiving the close-up evidence image sent by the dome camera.
The electronic device thus obtains the close-up evidence image captured by the dome camera at the target PT coordinate position.
In the embodiment of the application, having the dome camera analyze the close-up video stream for the close-up evidence image reduces the processing burden of the electronic device.
When multiple dome cameras can be linked, in order to improve the capture efficiency for the designated monitoring target, adjusting the dome camera to the target PT coordinate position may include: adjusting the dome cameras respectively to the target PT coordinates and to monitoring positions adjacent to the target PT coordinates.
The multiple dome cameras are respectively responsible for the target PT coordinates and the adjacent monitoring positions, which improves the capture efficiency for the designated monitoring target. The monitoring areas of the dome cameras may partially overlap, reducing detection failures caused by the designated monitoring target sitting at the edge of one camera's monitoring area, or they may not overlap; this can be set according to the actual situation. The correspondence between each dome camera and its monitoring position can be assigned randomly, or calculated with a shortest-path algorithm: taking the angle each dome camera must rotate to reach a monitoring position as the path length, the assignment minimizing the total path gives each camera its monitoring position.
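For the handful of linkable dome cameras, the shortest-path assignment described above could be sketched as a brute-force search minimizing total pan rotation (all names and the brute-force strategy are illustrative assumptions; the patent does not fix an algorithm):

```python
from itertools import permutations

def assign_cameras(current_p, target_p):
    """Assign each dome camera one monitoring position, minimizing total pan rotation.
    current_p: current P coordinates of the cameras; target_p: P coordinates to cover.
    Returns a list where entry i is the index of the position assigned to camera i."""
    def rot(a, b):
        # smallest rotation between two pan angles, accounting for wrap-around
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    best = min(permutations(range(len(target_p))),
               key=lambda perm: sum(rot(current_p[i], target_p[j])
                                    for i, j in enumerate(perm)))
    return list(best)
```

Brute force is O(n!) but acceptable for the few cameras linked at one scene; a Hungarian-algorithm assignment would scale further.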
In the embodiment of the application, the gun camera acquires the panoramic evidence image and the dome camera acquires the close-up evidence image, so that a close-up evidence image is obtained while the panoramic evidence image is obtained; predicting the target's position after the preset period allows for the linkage time of the dome camera and improves the success rate of close-up evidence image acquisition.
The inventors found in their study that the electronic device or dome camera performs close-up acquisition for a class of monitoring targets in general, so the acquired close-up evidence image may not show the designated monitoring target. For example, when vehicle A triggers a preset detection event, the detection algorithm in the electronic device or dome camera extracts close-up evidence images of vehicles and may extract a close-up evidence image of vehicle B, causing the panoramic evidence image and the close-up evidence image to be mismatched. In this regard, optionally, referring to fig. 4, after the close-up evidence image at the target PT coordinates is acquired through the dome camera, the multi-camera combined evidence obtaining method of the embodiment of the present application further includes:
S106, judging whether the monitoring target in the close-up evidence image and the specified monitoring target are the same target.
The electronic device can compare the monitoring target in the close-up evidence image with the specified monitoring target by methods such as feature comparison, so as to judge whether they are the same target.
Optionally, the determining whether the monitored target in the close-up evidence image and the specified monitored target are the same target includes:
S1061, judging whether the position of the monitoring target in the close-up evidence image matches the motion trail of the specified monitoring target in the panoramic video stream, according to the capture time of the close-up evidence image and the PT coordinates of the dome camera that captured it.
When the monitoring target is a vehicle, besides sending the close-up evidence image, the dome camera can also send the vehicle's license plate information, the capture time of the close-up evidence image, the vehicle modeling result, the PT coordinates of the dome camera at the moment of capture, and the like to the electronic device. The vehicle modeling result can be used in subsequent feature matching; the PT coordinates at the moment of capture and the capture time can be used to check whether the position of the monitoring target coincides with the motion trail.
The electronic device converts the position of the monitoring target in the close-up evidence image into a position in the gun camera image, hereinafter referred to as the mapped position, and compares the mapped position with the motion trail of the specified monitoring target in the panoramic video stream. Judging whether the position of the monitoring target in the close-up evidence image matches the motion trail of the specified monitoring target in the panoramic video stream is therefore judging whether the mapped position matches that motion trail.
S1062, if they do not match, determining that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target.
S1063, if they match, performing feature matching between the monitoring target in the close-up evidence image and the specified monitoring target, and determining that they are the same target when the feature matching indicates they are the same target.
S1064, when the feature matching indicates that the monitoring target in the close-up evidence image and the specified monitoring target are different targets, determining that they are not the same target.
Because feature matching consumes considerable computing resources, in this embodiment the motion trail is checked first, and feature matching is performed only when the position of the monitoring target in the close-up evidence image matches the motion trail of the specified monitoring target in the panoramic video stream. The trajectory check filters out some cases that are clearly not the same target, reducing the number of feature-matching operations and saving computing resources.
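The two-stage check in S1061 to S1064 can be sketched as follows, under simplifying assumptions: the mapped position and trail nodes are 2-D image coordinates, the trail is a list of (time, x, y) nodes, the pixel and time tolerances are invented, and the costly feature comparison is abstracted as a callable that is only invoked after the cheap trajectory check passes.

```python
def positions_match(mapped_pos, track, capture_time, tol_px=50.0, tol_s=0.5):
    """Cheap check: does the mapped close-up position lie on the panoramic
    trajectory near the capture time? track is a list of (t, x, y) nodes."""
    mx, my = mapped_pos
    for t, x, y in track:
        if abs(t - capture_time) <= tol_s:
            if ((x - mx) ** 2 + (y - my) ** 2) ** 0.5 <= tol_px:
                return True
    return False

def same_target(mapped_pos, track, capture_time, feature_match):
    """Run the expensive feature comparison only when the trajectory
    check passes, mirroring S1061-S1064."""
    if not positions_match(mapped_pos, track, capture_time):
        return False          # S1062: trajectory mismatch, different target
    return feature_match()    # S1063/S1064: decide by feature matching
```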
And S107, uploading the panoramic evidence image and the close-up evidence image to an alarm platform when the monitoring target in the close-up evidence image and the appointed monitoring target are the same monitoring target.
When the monitoring target in the close-up evidence image and the specified monitoring target are the same target, the electronic device uploads the panoramic evidence image and the close-up evidence image to the alarm platform. Specifically, the panoramic evidence image and the close-up evidence image can be composited and the composite image uploaded to the alarm platform; alternatively, they are not composited, and a preset calibration method marks both the panoramic evidence image and the close-up evidence image as evidence images of the specified monitoring target.
For example, several panoramic evidence images from the gun camera and a close-up evidence image from the dome camera can be composited; the composition format supports top-bottom, left-right, 2x2 grid ('田'-shaped) and other modes. Taking three panoramic evidence images as an example, a composite image in the 2x2 grid format is shown in fig. 5.
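The composition modes mentioned above amount to computing paste offsets for equally sized tiles. A minimal sketch, in which the tile sizes and mode names are assumptions rather than anything specified by the patent:

```python
def layout_offsets(n, tile_w, tile_h, mode="grid"):
    """Paste offsets (x, y) for compositing n equally sized evidence
    images in one of the modes described above."""
    if mode == "vertical":      # top-bottom composition
        return [(0, i * tile_h) for i in range(n)]
    if mode == "horizontal":    # left-right composition
        return [(i * tile_w, 0) for i in range(n)]
    # 2x2 grid ('田'-shaped) composition: row-major placement
    return [((i % 2) * tile_w, (i // 2) * tile_h) for i in range(n)]
```

With three panoramic images plus one close-up, `layout_offsets(4, ...)` yields the four corners of the 2x2 grid shown in fig. 5.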
Besides the evidence images, the electronic device can upload the violation type, violation time, license plate information, vehicle characteristic information, scene information and the like of the specified monitoring target to the alarm platform, so that the alarm platform can perform later operations such as display, retrieval and punishment.
The multi-camera combined evidence obtaining method according to the embodiment of the application is described below in detail for the case where the monitoring target is a vehicle.
Before evidence collection, the monitoring positions of the gun camera and the dome camera need to be calibrated. By associating specific positions in the gun camera image with the PTZ positions of a dome camera, the association between any position in the gun camera image and the dome camera's PTZ can be deduced from the associations of several points; one gun camera can be calibrated against multiple dome cameras, establishing a mapping between positions in the gun camera image and the PTZ of any dome camera. Preset detection events are then configured: the gun camera picture monitors the large scene, and lane line information, lane directions and the like are added to the gun camera picture; the preset detection events include, but are not limited to, violations such as illegal stopping, driving in reverse, line pressing, U-turns, motor vehicles occupying non-motor lanes, and illegal lane changes.
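One plausible way to deduce the dome camera PT for an arbitrary gun-camera position from a few calibrated points is interpolation; the sketch below uses inverse-distance weighting over calibrated (pixel -> pan/tilt) pairs. The patent does not specify the interpolation scheme, and naive angle averaging ignores pan wrap-around at 0/360 degrees, so this is purely illustrative:

```python
def interpolate_pt(calibration, px, py):
    """Estimate the dome camera (pan, tilt) for a gun-camera pixel (px, py)
    from a few calibrated pairs, using inverse-distance weighting.
    calibration: list of ((x, y), (pan, tilt)) tuples."""
    num_p = num_t = den = 0.0
    for (cx, cy), (pan, tilt) in calibration:
        d2 = (cx - px) ** 2 + (cy - py) ** 2
        if d2 == 0.0:
            return pan, tilt  # exact calibrated point: return it directly
        w = 1.0 / d2          # closer calibrated points weigh more
        num_p += w * pan
        num_t += w * tilt
        den += w
    return num_p / den, num_t / den
```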
During evidence collection, vehicle detection runs continuously on the panoramic video stream of the gun camera, and whether a vehicle in the picture triggers a preset detection event is judged from the vehicle's driving trail combined with the configured lane information, lane direction and the like; if a target triggers an event, the following steps are performed:
Extracting one or more pictures from the course of the preset detection event triggered by the vehicle to obtain panoramic evidence images, thereby recording the whole violation; linking the dome camera to capture a high-definition picture of the vehicle; recording the driving trail of the target vehicle in the picture and the time of each specific trail point; and performing target modeling on the target vehicle, where the modeling result is used for target comparison and is built from characteristic information such as vehicle type and vehicle color.
In the process of capturing the vehicle with the linked dome camera, the movement of the vehicle needs to be predicted: when a vehicle triggers an event, the position it will reach in the gun camera picture after a certain time is predicted from its driving trail and speed, combined with the configured lane information, lane direction and the like, and becomes the target position. The pre-judgment logic is as follows:
If the target keeps advancing in a certain direction, the pre-judged position after a period of time is a point further along that direction; if the target is detected to be changing lanes, the pre-judged position after a period of time is a point ahead in the lane it is changing into; position correction is applied against the lane lines, so the pre-judged position of the vehicle never exceeds the lane area; if the target's speed indicates it is about to stop, the pre-judged position is moved forward, or placed directly near the target vehicle for active capture. For the linked capture, one or more dome cameras are directed to the vicinity of the target position and zoomed to a suitable magnification; linking multiple dome cameras widens the detected cross-section, which improves the success rate of capturing the vehicle. Before the vehicle reaches the target position, the gun camera can keep correcting the target position according to the target's latest trail and re-link the dome camera, ensuring that the dome camera's latest position is accurate.
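The pre-judgment rules above (straight travel, lane change, stopping, lane-line correction) can be sketched as a single prediction function. Positions are in gun-camera image coordinates, the lane area is reduced to an x-interval, and all thresholds and parameter names are invented for illustration:

```python
def predict_position(x, y, vx, vy, dt, lane_x_min, lane_x_max,
                     changing_lane=False, new_lane_center_x=None,
                     stop_speed=0.5, lead_distance=5.0):
    """Predict where the vehicle will be after dt seconds, following the
    pre-judgment rules described above (all thresholds are illustrative)."""
    speed = (vx * vx + vy * vy) ** 0.5
    if speed < stop_speed:
        # Target is stopping: place the snapshot position just ahead of it.
        px, py = x, y + lead_distance
    elif changing_lane and new_lane_center_x is not None:
        # Lane change in progress: aim at a point ahead in the new lane.
        px, py = new_lane_center_x, y + vy * dt
    else:
        # Straight travel: linear extrapolation along the current heading.
        px, py = x + vx * dt, y + vy * dt
    # Correct against the lane lines: never predict outside the lane area.
    px = max(lane_x_min, min(lane_x_max, px))
    return px, py
```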
Capturing the close-up evidence image with the dome camera: after linkage, the dome camera enters a vehicle snapshot mode, in which it captures passing vehicles and analyzes their license plate information. The alarm information for each captured vehicle is sent to the electronic device and may include, besides the vehicle picture (the close-up evidence image): license plate information, capture time, the vehicle modeling result, the current PTZ position of the dome camera, and the like.
Associating the panoramic evidence image with the close-up evidence image: the electronic device receives the alarm information returned by the dome camera, and compares the capture time in the alarm information and the dome camera's PTZ position against the trail nodes of the target vehicle in the gun camera image; if they do not match, the alarm is discarded. If they match, the dome camera's vehicle modeling result is compared with the vehicle modeling result from the gun camera image; if the modeling results are consistent, the vehicle captured by the dome camera and the target vehicle captured by the gun camera are considered to be the same vehicle.
Compositing the panoramic evidence image and the close-up evidence image: when uploading, several panoramic evidence images and the dome camera's close-up evidence image can be composited, and the composition format supports top-bottom, left-right, 2x2 grid ('田'-shaped) and other modes. Referring to fig. 5, taking three panoramic evidence images as an example, four images are composited into a 2x2 grid; of course, the panoramic evidence images and the close-up evidence image may also be left uncomposited. The alarm picture, violation type, violation time, license plate information, vehicle characteristic information, scene information and the like are combined into one alarm message and sent to the alarm platform, so that the alarm platform can perform later operations such as display, retrieval and punishment.
In the embodiment of the application, using linked dome cameras solves the problem that a vehicle cannot be evidenced after making a U-turn or stopping; with linkage, a single dome camera working with the gun camera can cover the whole large scene for evidence collection; the pre-judgment makes the linkage more accurate, improving the dome camera's success rate in capturing the target vehicle; trail matching and modeling matching improve the accuracy of target matching across multiple devices; and linking multiple dome cameras with the gun camera covers a wider area for evidence collection, further improving the probability of capturing the target vehicle.
The embodiment of the application also provides a multi-camera combined evidence obtaining system, referring to fig. 6, the system comprises: a gun camera 601 and dome cameras 602; the number of dome cameras 602 may be one or more. The gun camera 601 performs the following steps in operation:
acquiring a panoramic video stream acquired by a gun camera, and judging whether a monitoring target in the panoramic video stream triggers a preset detection event;
if a monitoring target triggers the preset detection event, acquiring the video frames of the panoramic video stream in which a specified monitoring target triggers the preset detection event to obtain a panoramic evidence image, the specified monitoring target being the monitoring target that triggers the preset detection event;
Predicting the predicted position of the specified monitoring target in the gun camera image after a preset period;
determining target PT coordinates of the dome camera corresponding to the predicted position according to the association relation between each position in the pre-established gun camera image and the PT coordinates of the dome camera;
and acquiring a close-up evidence image at the PT coordinate of the target through the dome camera.
Optionally, the gun camera 601 can also implement any of the multi-camera combined evidence obtaining methods described above during operation.
The embodiment of the application also provides a multi-camera combined evidence obtaining system, referring to fig. 7, the system comprises: a server 701, a gun camera 702 and dome cameras 703; the number of dome cameras 703 may be one or more.
The server 701 implements the following steps in operation:
acquiring a panoramic video stream acquired by a gun camera, and judging whether a monitoring target in the panoramic video stream triggers a preset detection event;
if a monitoring target triggers the preset detection event, acquiring the video frames of the panoramic video stream in which a specified monitoring target triggers the preset detection event to obtain a panoramic evidence image, the specified monitoring target being the monitoring target that triggers the preset detection event;
Predicting the predicted position of the specified monitoring target in the gun camera image after a preset period;
determining target PT coordinates of the dome camera corresponding to the predicted position according to the association relation between each position in the pre-established gun camera image and the PT coordinates of the dome camera;
and acquiring a close-up evidence image at the PT coordinate of the target through the dome camera.
Optionally, the server 701 may also implement any of the multi-camera combined evidence obtaining methods described above.
The embodiment of the application also provides a multi-camera combined evidence obtaining device, referring to fig. 8, the device comprises:
the video stream detection module 801 is configured to acquire a panoramic video stream collected by a gun camera, and judge whether a monitoring target in the panoramic video stream triggers a preset detection event;
the panorama evidence obtaining module 802 is configured to, if a monitoring target triggers the preset detection event, acquire the video frames of the panoramic video stream in which a specified monitoring target triggers the preset detection event to obtain a panoramic evidence image, the specified monitoring target being the monitoring target that triggers the preset detection event;
a position prediction module 803, configured to predict a predicted position of the specified monitoring target in the gun camera image after a preset period;
The target coordinate determining module 804 is configured to determine a target PT coordinate of the dome camera corresponding to the predicted position according to an association relationship between each position in a pre-established gun camera image and the PT coordinate of the dome camera;
and a close-up evidence obtaining module 805, configured to obtain a close-up evidence image at the target PT coordinate through the dome camera.
Optionally, the multi-camera combined evidence obtaining device of the embodiment of the present application further includes: the coordinate association relation building module; the coordinate association relation establishing module comprises:
the mounting parameter obtaining sub-module is used for obtaining the GPS coordinates of the dome camera's mounting position and the mounting height of the dome camera;
the longitude and latitude distance calculation sub-module is used for determining, for any position in the gun camera image, the GPS coordinates of the actual position that position corresponds to in the actual scene, and calculating the longitude and latitude distances between the dome camera and the actual position according to the GPS coordinates of the actual position and the GPS coordinates of the dome camera;
the horizontal distance calculating sub-module is used for calculating the horizontal distance between the actual position and the dome camera according to the longitude and latitude distances;
the horizontal included angle calculating sub-module is used for calculating the horizontal included angle between the actual position and the specified direction through a trigonometric function according to the longitude and latitude distances;
the P coordinate determining sub-module is used for determining the P coordinate of the dome camera according to the horizontal included angle;
and the T coordinate determining sub-module is used for calculating the T coordinate of the dome camera according to the horizontal distance and the mounting height of the dome camera, thereby obtaining the association relation between any position in the gun camera image and the PT coordinates of the dome camera.
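The geometry performed by these sub-modules can be sketched as follows, assuming a local flat-earth approximation (reasonable at street scale), north as the specified reference direction for the P coordinate, and the T coordinate measured as the depression angle below horizontal; the constants and names are illustrative, not from the patent:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def pt_from_gps(cam_lat, cam_lon, cam_height, tgt_lat, tgt_lon):
    """Illustrative pan/tilt angles (degrees) for a dome camera mounted at
    (cam_lat, cam_lon) at cam_height metres, aimed at a ground point."""
    # North/east offsets in metres (the 'longitude and latitude distances').
    d_north = math.radians(tgt_lat - cam_lat) * EARTH_R
    d_east = math.radians(tgt_lon - cam_lon) * EARTH_R * math.cos(math.radians(cam_lat))
    # Horizontal distance between the dome camera and the actual position.
    horiz = math.hypot(d_north, d_east)
    # P coordinate: horizontal included angle relative to north, via trig.
    pan = math.degrees(math.atan2(d_east, d_north)) % 360.0
    # T coordinate: depression angle from mounting height down to the point.
    tilt = math.degrees(math.atan2(cam_height, horiz))
    return pan, tilt
```

A point 10 m due east of a camera mounted 10 m high should give a pan near 90° and a tilt near 45°.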
Optionally, the panoramic video stream includes lane line information and lane direction information, the specified monitoring target is a vehicle, and the preset detection event includes one or more of: illegal stopping, driving in reverse, line pressing, U-turns, a motor vehicle occupying a non-motor lane, and illegal lane changes.
Optionally, the location prediction module 803 includes:
the running parameter determining submodule is used for determining the running parameters of the specified monitoring target according to the panoramic video stream, wherein the running parameters comprise a motion track, a speed and an acceleration;
and the predicted position determining sub-module is used for predicting the predicted position in the gun camera image after the preset period of the monitoring target according to the running parameters of the specified monitoring target.
Optionally, the prediction position determining sub-module includes:
the first position prediction unit is used for judging according to the motion trail of the specified monitoring target, if the specified monitoring target runs straight, predicting the predicted position of the specified monitoring target in the straight line direction after a preset period of time according to the speed and the acceleration of the specified monitoring target;
The second position prediction unit is used for judging according to the motion trail of the specified monitoring target, and predicting the predicted position of the specified monitoring target in the lane after lane change according to the speed and the acceleration of the specified monitoring target if the specified monitoring target has lane change action;
and the third position prediction unit is used for judging according to the speed and the acceleration of the specified monitoring target: if the specified monitoring target will stop within a preset first distance threshold, taking, along the motion trail of the specified monitoring target, a position a preset second distance in front of the current position of the specified monitoring target as the predicted position.
Optionally, the close-up evidence obtaining module 805 includes:
the dome camera position adjusting sub-module is used for adjusting the dome camera to the position of the target PT coordinates;
the close-up video stream acquisition sub-module is used for acquiring the close-up video stream at the target PT coordinate position through the dome camera;
and the close-up video stream analysis sub-module is used for analyzing the close-up video stream and acquiring a close-up evidence image comprising a monitoring target in the close-up video stream.
Optionally, the close-up evidence obtaining module 805 includes:
the dome camera position adjusting sub-module is used for adjusting the dome camera to the position of the target PT coordinates;
the snapshot mode triggering sub-module is used for triggering the dome camera to start a snapshot mode, so that the dome camera captures a close-up evidence image including the monitoring target;
and the close-up evidence image receiving sub-module is used for receiving the close-up evidence image sent by the dome camera.
Optionally, the above dome camera position adjusting sub-module is specifically configured to: respectively adjust the dome cameras to the target PT coordinates and to monitoring positions adjacent to the target PT coordinates.
Optionally, the multi-camera combined evidence obtaining device according to the embodiment of the present application further includes:
the same target judging module is used for judging whether the monitoring target in the close-up evidence image and the appointed monitoring target are the same target or not;
and the evidence image uploading module is used for uploading the panoramic evidence image and the close-up evidence image to the alarm platform when the monitoring target in the close-up evidence image and the appointed monitoring target are the same monitoring target.
Optionally, the same target determining module includes:
the motion trail judging sub-module is used for judging whether the position of the monitoring target in the close-up evidence image matches the motion trail of the specified monitoring target in the panoramic video stream, according to the capture time of the close-up evidence image and the PT coordinates of the dome camera that captured it;
the first judging sub-module is used for judging, if they do not match, that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target;
the feature matching sub-module is used for performing, if they match, feature matching between the monitoring target in the close-up evidence image and the specified monitoring target, and judging that they are the same target when the feature matching indicates they are the same target;
and the second judging sub-module is used for judging that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target when the feature matching indicates they are different targets.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for embodiments of the apparatus, system, electronic device, and storage medium, the description is relatively simple as it is substantially similar to the method embodiments, with reference to the description of the method embodiments as relevant.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.
Claims (11)
1. A multi-camera combined evidence obtaining method, the method comprising:
acquiring a panoramic video stream acquired by a gun camera, and judging whether a monitoring target in the panoramic video stream triggers a preset detection event;
if the monitoring target triggers the preset detection event, acquiring a video frame of the panoramic video stream, wherein the video frame triggers the preset detection event, of a designated monitoring target to obtain a panoramic evidence image, and the designated monitoring target is a monitoring target triggering the preset detection event;
determining running parameters of the specified monitoring target according to the panoramic video stream, wherein the running parameters comprise a motion trail, a speed and an acceleration; judging according to the motion trail of the specified monitoring target: if the specified monitoring target runs straight, predicting the predicted position of the specified monitoring target in the straight-line direction after a preset period according to the speed and the acceleration of the specified monitoring target; judging according to the motion trail of the specified monitoring target: if the specified monitoring target performs a lane-change action, predicting the predicted position of the specified monitoring target in the lane after lane change, after a preset period, according to the speed and the acceleration of the specified monitoring target; and judging according to the speed and the acceleration of the specified monitoring target: if the specified monitoring target stops within a preset first distance threshold, taking, according to the motion trail of the specified monitoring target, a position a preset second distance in front of the current position of the specified monitoring target as the predicted position;
determining target PT coordinates of the dome camera corresponding to the predicted position according to the association relation between each position in the pre-established gun camera image and the PT coordinates of the dome camera;
And acquiring a close-up evidence image at the PT coordinate of the target through the dome camera.
2. The method of claim 1, wherein the step of pre-establishing the association relation between each position in the gun camera image and the PT coordinates of the dome camera comprises:
acquiring the GPS coordinates of the mounting position of the dome camera and the mounting height of the dome camera;
for any position in the gun camera image, determining the GPS coordinates of the actual position of that position in the actual scene, and calculating the longitude and latitude distances between the dome camera and the actual position according to the GPS coordinates of the actual position and the GPS coordinates of the dome camera;
calculating the horizontal distance between the actual position and the dome camera according to the longitude and latitude distances;
calculating the horizontal included angle between the actual position and the specified direction through a trigonometric function according to the longitude and latitude distances;
determining the P coordinate of the dome camera according to the horizontal included angle;
and calculating the T coordinate of the dome camera according to the horizontal distance and the mounting height of the dome camera, thereby obtaining the association relation between any position in the gun camera image and the PT coordinates of the dome camera.
3. The method of claim 1, wherein the panoramic video stream includes lane information and lane direction information, the specified monitoring target is a vehicle, and the preset detection event includes one or more of: illegal stopping, driving in reverse, line pressing, U-turns, a motor vehicle occupying a non-motor lane, and illegal lane changes.
4. The method of claim 1, wherein the acquiring, by the dome camera, a close-up evidence image at the target PT coordinates comprises:
adjusting the dome camera to the position of the target PT coordinates, and acquiring a close-up video stream at the position of the target PT coordinates through the dome camera;
and analyzing the close-up video stream to obtain a close-up evidence image including the monitoring target in the close-up video stream.
5. The method of claim 1, wherein the acquiring, by the dome camera, a close-up evidence image at the target PT coordinates comprises:
adjusting the dome camera to the position of the target PT coordinates, and triggering the dome camera to start a snapshot mode so that the dome camera captures a close-up evidence image including the monitoring target;
and receiving the close-up evidence image sent by the dome camera.
6. The method of claim 4 or 5, wherein said adjusting the dome camera to the position of the target PT coordinates comprises:
respectively adjusting the dome cameras to the target PT coordinates and to monitoring positions adjacent to the target PT coordinates.
7. The method of claim 1, wherein after the acquiring of the close-up evidence image at the target PT coordinates by the dome camera, the method further comprises:
Judging whether a monitoring target in the close-up evidence image and the appointed monitoring target are the same target or not;
and uploading the panoramic evidence image and the close-up evidence image to an alarm platform when the monitoring target in the close-up evidence image and the appointed monitoring target are the same monitoring target.
8. The method of claim 7, wherein the judging whether the monitoring target in the close-up evidence image and the specified monitoring target are the same target comprises:
judging, according to the capture time of the close-up evidence image and the PT coordinates of the dome camera that captured it, whether the position of the monitoring target in the close-up evidence image is consistent with the motion trail of the specified monitoring target in the panoramic video stream;
if not, judging that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target;
if so, performing feature matching between the monitoring target in the close-up evidence image and the specified monitoring target, and judging that they are the same target when the feature matching indicates the same target;
and judging that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target when the feature matching indicates different targets.
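The two-stage verification of claim 8 (trajectory consistency first, appearance feature matching second) can be sketched as follows. All names, tolerances, and the Jaccard-overlap stand-in for feature matching are assumptions for illustration, not the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float        # position in a shared ground coordinate frame
    y: float
    t: float        # capture time in seconds
    features: set   # appearance features (e.g. plate characters)

def interpolate_track(track, t):
    """Linearly interpolate the panoramic motion trail at time t."""
    prev, nxt = None, None
    for p in track:
        if p.t <= t:
            prev = p
        elif nxt is None:
            nxt = p
    if prev is None or nxt is None:
        return prev or nxt
    a = (t - prev.t) / (nxt.t - prev.t)
    return Detection(prev.x + a * (nxt.x - prev.x),
                     prev.y + a * (nxt.y - prev.y), t, prev.features)

def is_same_target(closeup, track, pos_tol=2.0, feat_thresh=0.5):
    expected = interpolate_track(track, closeup.t)
    # Stage 1: is the close-up position consistent with the trail?
    if abs(closeup.x - expected.x) > pos_tol or abs(closeup.y - expected.y) > pos_tol:
        return False
    # Stage 2: appearance feature matching (set overlap as a stand-in)
    overlap = len(closeup.features & expected.features) / max(
        len(closeup.features | expected.features), 1)
    return overlap >= feat_thresh
```

The early position check is cheap and filters out most mismatches before the costlier feature comparison runs.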
9. A multi-camera combined evidence obtaining device, the device comprising:
the video stream detection module is used for acquiring a panoramic video stream acquired by a gun camera and judging whether a monitoring target in the panoramic video stream triggers a preset detection event;
the panoramic evidence acquisition module is used for, if a monitoring target triggers the preset detection event, acquiring the video frame of the panoramic video stream in which the specified monitoring target triggers the preset detection event, so as to obtain a panoramic evidence image, wherein the specified monitoring target is the monitoring target that triggers the preset detection event;
the position prediction module is used for determining the running parameters of the specified monitoring target according to the panoramic video stream, wherein the running parameters comprise a motion trail, a speed and an acceleration; judging according to the motion trail of the specified monitoring target, and if the specified monitoring target runs straight, predicting the predicted position of the specified monitoring target in the straight-line direction after a preset period of time according to its speed and acceleration; judging according to the motion trail of the specified monitoring target, and if the specified monitoring target performs a lane change action, predicting the predicted position of the specified monitoring target in the lane after the lane change after a preset period of time according to its speed and acceleration; judging according to the speed and the acceleration of the specified monitoring target, and if the specified monitoring target will stop within a preset first distance threshold, taking a position a preset second distance ahead of the current position of the specified monitoring target along its motion trail as the predicted position;
the target coordinate determining module is used for determining the target PT coordinates of the dome camera corresponding to the predicted position according to the pre-established association relation between each position in the gun camera image and the PT coordinates of the dome camera;
and the close-up evidence acquisition module is used for acquiring a close-up evidence image at the target PT coordinates through the dome camera.
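The three prediction rules of the position prediction module can be illustrated with a small kinematic sketch. The thresholds, argument names, and the assumption that the trail runs along +x are all illustrative, not from the patent:

```python
def predict_position(x, y, vx, vy, ax, ay, dt,
                     changing_lane=False, new_lane_y=None,
                     stop_speed=0.5, ahead_offset=2.0):
    """Predict the target's (x, y) after dt seconds using the
    uniformly-accelerated displacement s = v*t + a*t^2/2."""
    speed = (vx**2 + vy**2) ** 0.5
    if speed < stop_speed:
        # Rule 3: target is (nearly) stopped within the first distance
        # threshold -> a preset second distance ahead of the current
        # position along its trail (assumed here to be the +x direction).
        return (x + ahead_offset, y)
    # Displacement along each axis under constant acceleration
    px = x + vx * dt + 0.5 * ax * dt**2
    py = y + vy * dt + 0.5 * ay * dt**2
    if changing_lane and new_lane_y is not None:
        # Rule 2: lane change -> same longitudinal prediction, but placed
        # in the destination lane.
        return (px, new_lane_y)
    # Rule 1: straight driving -> extrapolate in the current direction.
    return (px, py)
```

The predicted position would then be mapped to dome-camera PT coordinates through the pre-established gun-camera-to-PT association described in the target coordinate determining module.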
10. A multi-camera combined evidence obtaining system, the system comprising: a gun camera and a dome camera; the gun camera, when in operation, implements the method steps of any one of claims 1-8.
11. A multi-camera combined evidence obtaining system, the system comprising: a server, a gun camera and a dome camera; the server, when running, implements the method steps of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910380772.5A CN111914592B (en) | 2019-05-08 | 2019-05-08 | Multi-camera combined evidence obtaining method, device and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111914592A CN111914592A (en) | 2020-11-10 |
CN111914592B true CN111914592B (en) | 2023-09-05 |
Family
ID=73242545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910380772.5A Active CN111914592B (en) | 2019-05-08 | 2019-05-08 | Multi-camera combined evidence obtaining method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111914592B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112911231B (en) * | 2021-01-22 | 2023-03-07 | 杭州海康威视数字技术股份有限公司 | Linkage method and system of monitoring cameras |
CN113536961A (en) * | 2021-06-24 | 2021-10-22 | 浙江大华技术股份有限公司 | Suspended particulate matter detection method and device and gun-ball linkage monitoring system |
CN113591651A (en) * | 2021-07-22 | 2021-11-02 | 浙江大华技术股份有限公司 | Image capturing method, image display method and device and storage medium |
CN114677841B (en) * | 2022-02-10 | 2023-12-29 | 浙江大华技术股份有限公司 | Vehicle lane change detection method and vehicle lane change detection system |
CN115760910A (en) * | 2022-10-20 | 2023-03-07 | 浙江大华技术股份有限公司 | Target tracking method and device of gun and ball linkage equipment, terminal and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103716594A (en) * | 2014-01-08 | 2014-04-09 | 深圳英飞拓科技股份有限公司 | Panorama splicing linkage method and device based on moving target detecting |
CN105072414A (en) * | 2015-08-19 | 2015-11-18 | 浙江宇视科技有限公司 | Method and system for detecting and tracking target |
CN105979210A (en) * | 2016-06-06 | 2016-09-28 | 深圳市深网视界科技有限公司 | Pedestrian identification system based on multi-ball multi-gun camera array |
CN108447091A (en) * | 2018-03-27 | 2018-08-24 | 北京颂泽科技有限公司 | Object localization method, device, electronic equipment and storage medium |
CN109309809A (en) * | 2017-07-28 | 2019-02-05 | 阿里巴巴集团控股有限公司 | The method and data processing method, device and system of trans-regional target trajectory tracking |
CN109584309A (en) * | 2018-11-16 | 2019-04-05 | 厦门博聪信息技术有限公司 | Dual-lens emergency-deployment dome camera with gun camera and dome camera linkage
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6970102B2 (en) * | 2003-05-05 | 2005-11-29 | Transol Pty Ltd | Traffic violation detection, recording and evidence processing system |
US7742077B2 (en) * | 2004-02-19 | 2010-06-22 | Robert Bosch Gmbh | Image stabilization system and method for a video camera |
- 2019-05-08: CN application CN201910380772.5A, patent CN111914592B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN111914592A (en) | 2020-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111914592B (en) | Multi-camera combined evidence obtaining method, device and system | |
KR101647370B1 (en) | Road traffic information management system using camera and radar | |
US10229588B2 (en) | Method and device for obtaining evidences for illegal parking of a vehicle | |
US8098290B2 (en) | Multiple camera system for obtaining high resolution images of objects | |
KR101496390B1 (en) | System for Vehicle Number Detection | |
CN106571039A (en) | Automatic snapshot system for highway traffic offence | |
KR101032495B1 (en) | Multi-function detecting system of illegally parked vehicles using digital ptz technique and method of detecting thereof | |
KR101678004B1 (en) | node-link based camera network monitoring system and method of monitoring the same | |
CN113676702B (en) | Video stream-based target tracking and monitoring method, system, device and storage medium | |
CN106503622A (en) | Vehicle anti-tracking method and device | |
WO2020211593A1 (en) | Digital reconstruction method, apparatus, and system for traffic road | |
KR102061264B1 (en) | Unexpected incident detecting system using vehicle position information based on C-ITS | |
KR102434154B1 (en) | Method for tracking multi target in traffic image-monitoring-system | |
KR100820952B1 (en) | Detecting method at automatic police enforcement system of illegal-stopping and parking vehicle using single camera and system thereof | |
KR101832274B1 (en) | System for crime prevention of intelligent type by video photographing and method for acting thereof | |
CN111275957A (en) | Traffic accident information acquisition method, system and camera | |
KR101033237B1 (en) | Multi-function detecting system for vehicles and security using 360 deg. wide image and method of detecting thereof | |
CN111696365A (en) | Vehicle tracking system | |
KR102107957B1 (en) | Cctv monitoring system for detecting the invasion in the exterior wall of building and method thereof | |
CN111290001A (en) | Target overall planning method, device and equipment based on GPS coordinates | |
CN201142737Y (en) | Front end monitoring apparatus for IP network video monitoring system | |
KR20150019230A (en) | Method and apparatus for tracking object using multiple camera | |
CN103595958A (en) | Video tracking analysis method and system | |
KR20110077465A (en) | Apparatus and method for moving-object tracking with a shadow-removal module, using camera position and time | |
KR101069766B1 (en) | The enforcement system of illegal parking vehicle connected with closed circuit television for criminal prevention and method of detecting the illegal parking vehicle using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||