CN111597954A - Method and system for identifying vehicle position in monitoring video - Google Patents


Publication number
CN111597954A
CN111597954A (application CN202010396990.0A)
Authority
CN
China
Prior art keywords
image
vehicle
control points
monitoring picture
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010396990.0A
Other languages
Chinese (zh)
Inventor
董炜
颜敏骏
杜高丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bokang Yunxin Science & Technology Co ltd
Original Assignee
Bokang Yunxin Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bokang Yunxin Science & Technology Co., Ltd.
Priority to CN202010396990.0A
Publication of CN111597954A
Legal status: Pending


Classifications

    • G06V 20/40: Scenes; scene-specific elements in video content
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G08B 13/19602: Burglar, theft or intruder alarms using television cameras; image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08G 1/0175: Detecting movement of traffic to be counted or controlled; identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G06V 2201/08: Indexing scheme relating to image or video recognition or understanding; detecting or categorising vehicles

Abstract

The invention relates to the technical field of vehicle position identification, and provides a method and a system for identifying a vehicle position in a surveillance video. The method comprises the following steps: S1: presetting a plurality of control points in the monitored area of a surveillance video and, for each control point, acquiring both its GPS coordinates and its two-dimensional coordinates in the monitoring picture; S2: establishing a coordinate mapping relation between two-dimensional coordinates in the monitoring picture and GPS coordinates, based on the two-dimensional and GPS coordinates of the control points in the same monitoring picture; S3: receiving the GPS coordinates of a vehicle, obtaining the vehicle's corresponding two-dimensional coordinates in the monitoring picture through the coordinate mapping relation, and marking the vehicle's position at those coordinates. The vehicle's position can thus be marked in the monitoring picture, so that airport managers can clearly locate the vehicle on the monitoring picture with the naked eye.

Description

Method and system for identifying vehicle position in monitoring video
Technical Field
The invention relates to the technical field of vehicle position identification, in particular to a method and a system for identifying a vehicle position in a monitoring video.
Background
In recent years the domestic aviation industry has developed rapidly: air passenger traffic has grown at about 14% per year over the past 10 years, civil aviation carried some 600 million passengers in 2018, and the pressure on safe airport operation keeps increasing. Vehicle intrusion accidents, in which a vehicle mistakenly enters the flight area or the aircraft parking area, have occurred many times at home and abroad.
An airport vehicle management system obtains each vehicle's GPS coordinates in real time through a GPS device installed on the vehicle. When a vehicle enters the flight area or the parking area in violation of the rules, the vehicle management system sends alarm information to alert airport managers.
Through the airport's video surveillance system, airport managers can view real-time monitoring pictures of the flight area, the parking area and other zones. Because these areas are very large, an ordinary vehicle occupies only a few pixels in the monitoring picture and cannot be recognized directly by image recognition techniques.
So when a vehicle illegally enters the flight area or the parking area, managers receive the alarm from the vehicle management system but still find it difficult to spot the vehicle in the monitoring picture with the naked eye, which creates a serious safety hazard.
How to mark the vehicle's position in the monitoring picture is therefore an urgent problem to be solved.
Disclosure of Invention
In view of the foregoing problems, an object of the present invention is to provide a method and a system for identifying a vehicle position in a surveillance video, which can mark the vehicle's position in the monitoring picture so that airport managers can clearly locate the vehicle directly with the naked eye.
The above object of the present invention is achieved by the following technical solutions:
a method of identifying a vehicle position in a surveillance video, comprising the following steps:
S1: presetting a plurality of control points in the monitored area of a surveillance video and, for each control point, acquiring both its GPS coordinates and its two-dimensional coordinates in the monitoring picture;
S2: establishing a coordinate mapping relation between two-dimensional coordinates in the monitoring picture and GPS coordinates, based on the two-dimensional and GPS coordinates of the control points in the same monitoring picture;
S3: receiving the GPS coordinates of a vehicle, obtaining the vehicle's corresponding two-dimensional coordinates in the monitoring picture through the coordinate mapping relation, and marking the vehicle's position at those coordinates.
Further, in step S1, the two-dimensional coordinates of a control point in the monitoring picture are acquired as follows:
collecting images of the control points, generating an image model of each control point through image training with an open-source algorithm for image recognition and positioning based on a deep neural network, and building an image model library;
recognizing the control points in the monitoring picture against the image models in the image model library, and acquiring each control point's two-dimensional coordinates in the monitoring picture.
Further, the image model library is built in the following specific steps:
collecting images of each control point under different conditions, including different weather, lighting, angles and distances;
preprocessing the collected images, including horizontal and vertical flipping, random cropping, random-angle rotation, and changes of image contrast and brightness;
labeling each collected image with the category and position of the control point it shows, forming labeled data matched to the image, and placing the labeled data in a training file library established for this purpose;
creating the configuration files required for training, and training the images with an open-source algorithm for image recognition and positioning based on a deep neural network, such as the YOLO algorithm, to generate the image model library.
Further, after generating the image model library, the method further includes:
when all the control points have been recognized without error, calculating the relative position and angle between the two-dimensional coordinates of every two control points in the monitoring picture, and recording them as the relative position and angle standard between the control points;
in subsequent recognitions, recalculating the relative positions and angles between the control points and comparing them with the standard; when the deviation from the values recorded in the standard exceeds a preset error value, the control point is judged to be misrecognized.
Further, in step S3, identifying the position of the vehicle according to the two-dimensional coordinates further includes:
performing image processing on the surveillance video and marking the vehicle's position at its corresponding two-dimensional coordinates in the monitoring picture, for example by drawing a bounding box around it.
In order to execute the above method, the invention also provides a system for identifying a vehicle position in a surveillance video, comprising a control point presetting module, a mapping relation establishing module and a vehicle position identification module;
the control point presetting module is used for presetting a plurality of control points in the monitored area of a surveillance video and, for each control point, acquiring both its GPS coordinates and its two-dimensional coordinates in the monitoring picture;
the mapping relation establishing module is used for establishing a coordinate mapping relation between two-dimensional coordinates in the monitoring picture and GPS coordinates, based on the two-dimensional and GPS coordinates of the control points in the same monitoring picture;
the vehicle position identification module is used for receiving the GPS coordinates of a vehicle, obtaining the vehicle's corresponding two-dimensional coordinates in the monitoring picture through the coordinate mapping relation, and marking the vehicle's position at those coordinates.
Further, the control point presetting module comprises a GPS coordinate acquisition submodule and a two-dimensional coordinate acquisition submodule;
the GPS coordinate acquisition submodule is used for acquiring the GPS coordinates of each control point;
the two-dimensional coordinate acquisition submodule is used for acquiring each control point's two-dimensional coordinates in the monitoring picture, specifically by: collecting images of the control points, generating an image model of each control point through image training with an open-source algorithm for image recognition and positioning based on a deep neural network, and building an image model library; then recognizing the control points in the monitoring picture against the image models in the library and acquiring their two-dimensional coordinates.
Further, the two-dimensional coordinate acquisition submodule further includes:
an image model library establishing unit, which builds the image model library by: collecting images of each control point under different conditions, including different weather, lighting, angles and distances; preprocessing the collected images, including horizontal and vertical flipping, random cropping, random-angle rotation, and changes of image contrast and brightness; labeling each collected image with the category and position of the control point it shows, forming labeled data matched to the image, and placing the labeled data in a training file library established for this purpose; and creating the configuration files required for training and training the images with an open-source algorithm for image recognition and positioning based on a deep neural network, such as the YOLO algorithm, to generate the image model library;
a relative position and angle calculating unit, which computes the relative position and angle standard by: when all the control points have been recognized without error, calculating the relative position and angle between the two-dimensional coordinates of every two control points in the monitoring picture as the standard between the control points; in subsequent recognitions, recalculating the relative positions and angles and comparing them with the standard, a control point being judged misrecognized when the deviation from the recorded values exceeds a preset error value.
Further, the vehicle position identification module further includes:
an image processing submodule, which performs image processing on the surveillance video and marks the vehicle's position at its corresponding two-dimensional coordinates in the monitoring picture, for example by drawing a bounding box around it.
Compared with the prior art, the invention has at least one of the following beneficial effects:
(1) A method for identifying a vehicle position in a surveillance video is established, specifically: presetting a plurality of control points in the monitored area of a surveillance video and, for each control point, acquiring both its GPS coordinates and its two-dimensional coordinates in the monitoring picture; establishing a coordinate mapping relation between two-dimensional coordinates in the monitoring picture and GPS coordinates, based on the two-dimensional and GPS coordinates of the control points in the same monitoring picture; receiving the GPS coordinates of a vehicle, obtaining the vehicle's corresponding two-dimensional coordinates in the monitoring picture through the coordinate mapping relation, and marking the vehicle's position at those coordinates. The vehicle's position can thus be marked in the monitoring picture, so that airport managers can clearly locate the vehicle directly with the naked eye.
(2) After the image model library has been generated and all the control points have been recognized without error, the relative position and angle between the two-dimensional coordinates of every two control points in the monitoring picture are calculated and recorded as the relative position and angle standard. In subsequent recognitions the relative positions and angles are recalculated and compared with the standard, and a control point is judged misrecognized when the deviation from the recorded values exceeds a preset error value. This allows a quick check of whether the control points have been recognized correctly and improves the accuracy of subsequent vehicle position identification.
(3) Through image processing of the surveillance video, the vehicle's position is marked at its corresponding two-dimensional coordinates in the monitoring picture, for example with a bounding box, so that the vehicle is displayed more clearly and its position can be seen at a glance.
Drawings
FIG. 1 is an overall flow chart of a method of identifying vehicle location in surveillance video in accordance with the present invention;
FIG. 2 is an overall block diagram of a system for identifying vehicle location in surveillance video in accordance with the present invention;
FIG. 3 is a block diagram of a control point preset module in a system for identifying vehicle location in surveillance video in accordance with the present invention;
FIG. 4 is a block diagram of a vehicle position identification module in a system for identifying a vehicle position in a surveillance video according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In current vehicle management systems, especially in places such as airports where vehicle management is strict, when a vehicle mistakenly enters the flight area or the parking area it is vital that airport managers learn of the intrusion quickly so that it can be handled immediately.
In existing vehicle management systems, the GPS coordinates of a vehicle can be obtained in real time through a GPS device installed on the vehicle. When the vehicle illegally enters the flight area or the parking area, the vehicle management system sends alarm information to notify airport managers of the intrusion. Because the flight area and the parking area are very large, airport managers generally cannot enter them directly to search for and deal with the intruding vehicle; instead they must immediately locate the vehicle in the monitoring pictures of the airport's video surveillance system and notify the relevant personnel and aircraft to take countermeasures in time, so as to avoid accidents.
For the same reason, vehicles appear very small in the monitoring picture, and especially in bad weather or at night airport managers can hardly find the corresponding vehicle in the monitoring picture with the naked eye, which creates a serious safety hazard.
The core idea of the invention is as follows: preset a plurality of control points in the monitored area of a surveillance video; acquire, for each control point, both its GPS coordinates and its two-dimensional coordinates in the monitoring picture; establish a coordinate mapping relation from the two-dimensional and GPS coordinates of the control points in the same monitoring picture; then, on receiving the GPS coordinates of a vehicle, obtain the vehicle's corresponding two-dimensional coordinates through the mapping relation and mark the position of the intruding vehicle in the monitoring picture at those coordinates.
First embodiment
FIG. 1 is the overall flowchart of the method for identifying a vehicle position in a surveillance video according to the present invention. The method includes:
s1: in a monitoring area of a monitoring video, a plurality of control points are preset, and for each control point, a GPS coordinate of the control point and a two-dimensional coordinate of the control point in a monitoring picture are acquired simultaneously.
(1) Presetting of control points
A plurality of control points are preset. To make it possible to establish a two-dimensional coordinate system, the selected control points should preferably be distributed around the whole monitoring picture. For a dome camera in particular, a suitable number of control points must be visible at every monitoring angle, otherwise a two-dimensional coordinate system cannot be established for the surveillance video.
The invention places no requirement on the appearance of a control point beyond being recognizable in the surveillance video. To ease recognition, however, objects with distinctive shapes can be chosen as control points in practice; for example, the tower, terminal buildings or other distinctively shaped structures may be selected. Using objects with distinctive shapes makes subsequent recognition of the control points easier and more accurate. These preferred choices are given only as examples and do not limit the scope of control point selection in the present invention.
(2) GPS coordinate acquisition of control points
The GPS coordinates of a control point can be obtained with any device equipped with GPS: place the device near the control point and read off the GPS coordinates.
Preferably, a vehicle can simply be parked near the control point and its GPS coordinates read directly from the vehicle management system and used as the control point's coordinates; this fully guarantees the consistency of the GPS coordinate data. Of course, any other method of acquiring GPS coordinates can be applied to the invention; the above is only a preferred method, and other schemes are not described again here.
(3) Two-dimensional coordinate acquisition of control points
The two-dimensional coordinates of a control point in the monitoring picture are acquired as follows:
(31) Collect images of the control points, generate an image model of each control point through image training with an open-source algorithm for image recognition and positioning based on a deep neural network, and build an image model library.
In this embodiment, YOLO (You Only Look Once) is preferably used to generate the image models of the control points.
YOLO is an open-source algorithm for image recognition and localization based on deep neural networks, which has been developed to version v3. Its greatest strength is speed: it can be used in real-time systems and provides a ready-made network structure and training method. With this algorithm, the images can conveniently be trained in depth to obtain the image model of each control point; based on the resulting model library, the control points can then be recognized quickly in real-time video and their two-dimensional coordinates in the picture obtained.
In the monitoring picture, airport buildings such as the tower and the terminal can serve as control points. Although YOLOv3 provides model libraries for many common objects, the appearance of airport buildings is distinctive, so a model library for them must be generated in-house.
By image training on pictures of the control points' appearance, an accurate model is built for each control point and an image model library centred on the airport is established.
The image model library is built with the YOLOv3 algorithm in the following specific steps:
A) Collect images of each control point under different conditions, including different weather, lighting, angles and distances.
With a high-definition camera, photograph each control point under these varying conditions. Especially when a dome camera is used, images from as many different angles and distances as possible should be collected.
Using images acquired under varied conditions improves the accuracy of the image model library subsequently built from them.
B) Preprocess the collected images, including horizontal and vertical flipping, random cropping, random-angle rotation, and changes of image contrast and brightness.
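As one illustration of step B), the flips, crops and contrast or brightness changes can be sketched with plain NumPy array operations; arbitrary-angle rotation would in practice use an image library such as OpenCV or Pillow. The function names below are illustrative choices, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng()

def hflip(img):
    # horizontal flip: reverse the column axis
    return img[:, ::-1]

def vflip(img):
    # vertical flip: reverse the row axis
    return img[::-1, :]

def random_crop(img, out_h, out_w):
    # cut a random out_h x out_w window out of the image
    h, w = img.shape[:2]
    top = int(rng.integers(0, h - out_h + 1))
    left = int(rng.integers(0, w - out_w + 1))
    return img[top:top + out_h, left:left + out_w]

def contrast_brightness(img, alpha=1.0, beta=0.0):
    # out = alpha * img + beta, clipped to the valid 8-bit range
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)
```

Each transform maps one collected picture to several training variants, which is what lets a modest set of control point photographs cover many viewing conditions.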
C) Label each collected image with the category and position of the control point it shows, forming labeled data matched to the image; at the same time establish a training file library and place the labeled data in it.
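For step C), YOLO's Darknet training format stores one text line per labeled object: the class index followed by the box centre and size, each normalised by the image dimensions. A small conversion helper (the function name is ours, not the patent's) might look like:

```python
def to_darknet_label(class_id, box, img_w, img_h):
    """Convert a pixel bounding box (x_min, y_min, x_max, y_max) into a
    Darknet label line: 'class x_center y_center width height', with all
    four geometry values normalised to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"
```

For example, a tower occupying pixels (0, 0) to (100, 50) in a 200 x 100 frame would be written as `0 0.250000 0.250000 0.500000 0.500000`.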
D) Create the configuration files required for training, and train the images with an open-source algorithm for image recognition and positioning based on a deep neural network, such as the YOLO algorithm, to generate the image model library.
Before training, a configuration file for image training must be created and the cfg parameters configured.
The following is an example of a specific cfg parameter configuration. The example only illustrates one possible configuration and is not intended to limit the invention:
a. Number of pictures fed to the network per iteration is batch/subdivisions:
batch = 32
subdivisions = 2
(the larger the batch, the better the training effect; the larger subdivisions, the lower the memory pressure)
b. Picture augmentation:
angle = 10 (range of picture rotation, in degrees)
saturation = 1.5, exposure = 1.5 (saturation and exposure variation)
hue = .1 (hue variation range)
c. Training optimisation strategy:
learning_rate = 0.001 (initial learning rate)
policy = steps (stepwise learning rate schedule)
steps = 1000,25000,35000 (iteration counts at which the learning rate changes)
scales = 10,.5,.2 (factors applied to the learning rate at each step)
random = 1 (enable multi-scale training, training on randomly sized pictures)
Deep training is then started in the YOLOv3 framework and stopped when top-5 accuracy reaches 93.8%, yielding the image models of the control points.
Furthermore, to ensure that control points are recognized accurately under different conditions, the recognition accuracy must be tested. If it does not meet the working requirement, more pictures must be collected, the training cfg parameters adjusted, and the steps for building the image model library repeated.
(32) Recognize the control points in the monitoring picture against the image models in the image model library, and acquire each control point's two-dimensional coordinates in the monitoring picture.
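This step can be reduced to choosing, for each control point class, the most confident detection the trained model emits and converting its normalised centre back to picture pixels. A minimal sketch, assuming YOLO-style detections of the form `(class_id, confidence, x_center, y_center)` with centres normalised to [0, 1] (the helper name and input shape are our assumptions):

```python
def control_point_pixels(detections, img_w, img_h, conf_thresh=0.5):
    """Keep, per control point class, the highest-confidence detection and
    return its two-dimensional pixel coordinates in the monitoring picture."""
    best = {}
    for class_id, conf, xc, yc in detections:
        if conf < conf_thresh:
            continue  # discard weak detections
        if class_id not in best or conf > best[class_id][0]:
            px = int(round(xc * img_w))
            py = int(round(yc * img_h))
            best[class_id] = (conf, (px, py))
    return {cls: pt for cls, (_, pt) in best.items()}
```

In a deployment the `detections` list would come from running the trained YOLOv3 model on each frame, e.g. through OpenCV's DNN module.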
Under severe weather conditions the accuracy of pattern recognition of the control points decreases, so misrecognized control points must be eliminated, as follows.
After the image model library has been generated, relative position and angle standards between the control points are recorded as the basis for later checking whether the control points have been recognized correctly:
A) When all the control points have been recognized without error, calculate the relative position and angle between the two-dimensional coordinates of every two control points in the monitoring picture, and record them as the relative position and angle standard between the control points.
B) In subsequent recognitions, recalculate the relative positions and angles between the control points and compare them with the standard; when the deviation from the recorded values exceeds a preset error value, the control point is judged to be misrecognized.
Recognition of control points by the YOLOv3 algorithm against the image model library is highly accurate. Even when recognition goes wrong in bad weather or dim light, the error is usually one of identity (control point A is recognized as control point B) while the two-dimensional coordinates obtained remain largely accurate. Exploiting this property, once the two-dimensional coordinates of all recognized control points are available, the relative distance and angle between every two control points are computed and verified against the data recorded in the standard; a control point whose distances or angles to the other control points deviate too far from the recorded values is considered misrecognized and eliminated.
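The consistency check described above can be sketched as follows: pairwise distances and angles among the recognized control points are compared with the recorded standard, and a point that disagrees with most of its pairs is rejected. The names, tolerances and majority-vote rule are our illustrative choices, not values fixed by the patent:

```python
import math

def pair_geometry(p, q):
    # distance and bearing (degrees) from point p to point q in the picture
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

def misrecognised(points, standard, dist_tol=10.0, ang_tol=5.0):
    """points: {name: (x, y)} just recognised; standard: {(a, b): (dist, ang)}
    recorded when every control point was known to be correct. Returns the
    names of points whose geometry disagrees with most of their pairs."""
    votes = {name: 0 for name in points}
    for (a, b), (d_ref, ang_ref) in standard.items():
        if a not in points or b not in points:
            continue
        d, ang = pair_geometry(points[a], points[b])
        ang_err = abs((ang - ang_ref + 180.0) % 360.0 - 180.0)  # wrap to [-180, 180]
        if abs(d - d_ref) > dist_tol or ang_err > ang_tol:
            votes[a] += 1
            votes[b] += 1
    # reject a point inconsistent with more than half of its pairs
    limit = max(1, (len(points) - 1) // 2)
    return {name for name, v in votes.items() if v > limit}
```

The majority vote matters: a single misplaced point corrupts every pair it belongs to, so it accumulates votes while the correctly recognized points each accumulate only one.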
S2: establishing a coordinate mapping relation between the two-dimensional coordinates of the monitoring picture and the GPS coordinates according to the two-dimensional coordinates and the GPS coordinates of the control points in the same monitoring picture.
(1) Establishing a two-dimensional coordinate system
A two-dimensional coordinate system of the monitoring picture is established according to the two-dimensional coordinates of the control points in the same monitoring picture.
When establishing the coordinate system, the deformation of the monitoring picture must be taken into account: because of perspective, the monitored area appears in the picture as a trapezoid whose proportions vary. By measuring and calculating the lengths and areas of the actual monitoring area of each monitoring camera, a deformation model of each camera's monitoring picture can be obtained.
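Under the common assumption that the monitored area is approximately planar (an assumption added here for illustration; the patent does not name a specific model), the trapezoidal deformation described above is the standard perspective mapping between the ground plane and the image plane, which can be written as a planar homography:

```latex
\begin{pmatrix} u \\ v \\ w \end{pmatrix}
=
\begin{pmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{pmatrix}
\begin{pmatrix} X \\ Y \\ 1 \end{pmatrix},
\qquad
(u', v') = \left( \frac{u}{w},\; \frac{v}{w} \right)
```

Here $(X, Y)$ is a point on the ground plane, $(u', v')$ is its pixel position, and $H$ is defined only up to scale (8 degrees of freedom), so it is determined by four or more non-collinear control points such as those preset in step S1.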
(2) Establishing a GPS coordinate system
A GPS coordinate system of the monitoring picture is established according to the GPS coordinate information of the control points in the same monitoring picture.
As with the two-dimensional coordinate system above, the deformation of the monitoring picture needs to be considered here as well, and it is handled by the same method.
(3) Establishing the coordinate mapping relation
A mapping relation between the two-dimensional coordinates of the monitoring picture and the GPS coordinates is established according to the two-dimensional coordinate information and the GPS coordinate information of the control points in the same monitoring picture.
Once the two-dimensional coordinate system and the GPS coordinate system of the same monitoring picture are available, the mapping between them, that is, the two-dimensional coordinate in the monitoring picture that corresponds to a given GPS coordinate, can be obtained conveniently.
The mapping relation between the two-dimensional coordinates in the monitoring video and the GPS coordinates is thus established.
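One way such a mapping could be realized, assuming a roughly planar monitored area, is to convert GPS coordinates to local metric coordinates with an equirectangular approximation and then project them into the picture with a homography fitted to four or more control points. The function names, the small-area approximation, and the least-squares fitting method below are illustrative assumptions, not the patent's prescribed implementation:

```python
import math
import numpy as np

def gps_to_local(lat, lon, lat0, lon0):
    """Approximate GPS -> local metric coordinates (equirectangular,
    valid only over a small area such as one camera's field of view)."""
    x = (lon - lon0) * 111_320.0 * math.cos(math.radians(lat0))
    y = (lat - lat0) * 110_540.0
    return x, y

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from >= 4 point
    pairs by the direct linear transform, fixing h33 = 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def project(H, x, y):
    """Apply a homography to one point (homogeneous divide included)."""
    u, v, w = H @ (x, y, 1.0)
    return u / w, v / w
```

With the control points' local metric coordinates as `src` and their pixel coordinates as `dst`, `project(H, *gps_to_local(lat, lon, lat0, lon0))` yields the picture position of a vehicle's GPS fix.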
S3: receiving the GPS coordinates of the vehicle, acquiring the two-dimensional coordinates corresponding to the vehicle in the monitoring picture according to the coordinate mapping relation, and identifying the position of the vehicle according to the two-dimensional coordinates.
Identifying the position of the vehicle according to the two-dimensional coordinates further comprises:
performing image processing on the monitoring video, and marking the position of the vehicle in the monitoring picture, for example with a bounding box, at the two-dimensional coordinates corresponding to the vehicle.
Through this image processing, the vehicle is displayed more clearly in the monitoring picture, and its position can be seen at a glance.
Regarding the marking of the vehicle position, it is preferable to convert the coordinates of each vehicle into a rectangle and to display the position of the vehicle as a bounding box in the monitoring picture by image processing techniques. Displaying the location of the vehicle in the form of a bounding box is only a preferred method and is not intended to limit the present invention. For example, if the monitoring-video system can be connected directly to the vehicle system, a picture of the vehicle can be displayed on the monitoring picture, so that an airport manager can see at a glance which vehicle is parked at which position.
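As a minimal sketch of the bounding-box marking, the snippet below clamps a fixed-size box around the vehicle's mapped pixel coordinate and draws its outline directly into a NumPy image array. In practice a library call such as OpenCV's `cv2.rectangle` would normally be used; the pure-NumPy drawing, the function names, and the default box size are assumptions for illustration:

```python
import numpy as np

def vehicle_box(center, half_w=40, half_h=20, frame_shape=(1080, 1920)):
    """Build a rectangle around the vehicle's pixel center, clamped to
    the frame. The half-extents are arbitrary placeholders."""
    cx, cy = center
    h, w = frame_shape[:2]
    x0, y0 = max(0, cx - half_w), max(0, cy - half_h)
    x1, y1 = min(w - 1, cx + half_w), min(h - 1, cy + half_h)
    return (x0, y0), (x1, y1)

def draw_box(frame, top_left, bottom_right, color=(0, 255, 0), thickness=2):
    """Draw a rectangular outline on an H x W x 3 image array
    (a stand-in for cv2.rectangle)."""
    x0, y0 = top_left
    x1, y1 = bottom_right
    frame[y0:y0 + thickness, x0:x1 + 1] = color          # top edge
    frame[y1 - thickness + 1:y1 + 1, x0:x1 + 1] = color  # bottom edge
    frame[y0:y1 + 1, x0:x0 + thickness] = color          # left edge
    frame[y0:y1 + 1, x1 - thickness + 1:x1 + 1] = color  # right edge
    return frame
```

The interior of the box is left untouched so the vehicle itself remains visible in the monitoring picture.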
Furthermore, information such as the model, state and application of the vehicle can be queried through the vehicle system and displayed in specific applications.
Second embodiment
Fig. 2 is a detailed block diagram of a system for identifying the position of a vehicle in a surveillance video according to the present invention. The system comprises a control point presetting module, a mapping relation establishing module and a vehicle position identification module.
The control point presetting module 11 is configured to preset a plurality of control points in a monitoring area of a monitoring video, and simultaneously acquire, for each control point, a GPS coordinate of the control point and a two-dimensional coordinate of the control point in a monitoring picture;
the mapping relationship establishing module 12 is configured to establish a coordinate mapping relationship between the two-dimensional coordinates of the monitoring picture and the GPS coordinates according to the two-dimensional coordinates and the GPS coordinates of the control points in the same monitoring picture;
the vehicle position identification module 13 is configured to receive the GPS coordinates of the vehicle, acquire the two-dimensional coordinates of the vehicle in the monitoring screen according to the coordinate mapping relationship, and identify the position of the vehicle according to the two-dimensional coordinates.
Further, the control point presetting module 11 further includes a GPS coordinate acquisition submodule 111 and a two-dimensional coordinate acquisition submodule 112;
the GPS coordinate acquisition submodule 111 is configured to acquire the GPS coordinate corresponding to the control point;
the two-dimensional coordinate obtaining sub-module 112 is configured to obtain the two-dimensional coordinates of the control point in the monitoring picture, and specifically to: collect images of the control points, generate image models of the control points through an image training method, including an open-source algorithm for image recognition and positioning based on a deep neural network, and establish an image model library; and identify the control point in the monitoring picture according to the image models in the image model library and acquire the two-dimensional coordinates of the control point in the monitoring picture.
Further, the two-dimensional coordinate obtaining sub-module 112 further includes:
an image model library establishing unit 1121, configured to establish the image model library, specifically to: collect images of each control point, including images taken under different conditions of weather, light, angle and distance; preprocess the collected images, including horizontal and vertical flipping, random cropping, random-angle rotation and changes of image contrast and brightness; label the category and the position of the control point corresponding to each collected image to form labeled data matched with the image, establish a training file library, and place the labeled data in the training file library; and establish the configuration files required for training, and train the images with an open-source algorithm for image recognition and positioning based on a deep neural network, including the YOLO algorithm, to generate the image model library.
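The preprocessing listed for unit 1121 can be sketched with NumPy alone (random-angle rotation is omitted here, since it is normally delegated to an image library such as OpenCV or Pillow); the crop ratio, jitter ranges, and function name are illustrative assumptions, not values given in the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, rng=rng):
    """Apply the preprocessing named in the text: random flips, a random
    crop, and contrast/brightness jitter on an H x W x C uint8 image."""
    img = image.astype(np.float32)
    if rng.random() < 0.5:
        img = img[:, ::-1]            # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]            # vertical flip
    # random crop to 90% of each side (resize back omitted for brevity)
    h, w = img.shape[:2]
    ch, cw = int(h * 0.9), int(w * 0.9)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    img = img[top:top + ch, left:left + cw]
    # contrast (multiplicative gain) and brightness (additive bias) jitter
    gain = rng.uniform(0.8, 1.2)
    bias = rng.uniform(-20.0, 20.0)
    img = np.clip(img * gain + bias, 0, 255)
    return img.astype(np.uint8)
```

Each training image would be passed through `augment` one or more times before labeling, so that the model library covers the weather, light, angle and distance variations the text describes.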
The relative position and angle calculation unit 1122 is configured to calculate the relative position and angle standard, specifically to: when all the control points are identified without error, calculate the relative position and the angle between the two-dimensional coordinates of any two control points in the monitoring picture as the relative position and angle standard between the control points; and, when the control points are identified subsequently, calculate the relative positions and angles between the control points again and compare them with the relative position and angle standard, wherein, when the deviation from the values recorded in the relative position and angle standard exceeds a preset error value, the corresponding control point is considered misidentified.
Further, the vehicle position identification module 13 further includes:
the image processing sub-module 131 is configured to perform image processing on the surveillance video and to mark the position of the vehicle in the surveillance picture, for example with a bounding box, at the two-dimensional coordinates corresponding to the vehicle in the surveillance picture.
A computer readable storage medium storing computer code which, when executed, performs the method as described above. Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
It should be noted that the above embodiments can be freely combined as necessary.
The software program of the present invention can be executed by a processor to implement the steps or functions described above. Likewise, the software programs of the present invention (including associated data structures) can be stored in a computer-readable recording medium, such as RAM, a magnetic or optical drive, or a floppy disk. In addition, some steps or functions of the present invention may be implemented in hardware, for example as circuitry that cooperates with the processor to perform the various functions or steps. The methods disclosed in the embodiments of the present specification can be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present specification may be implemented or performed accordingly. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the embodiments of the present specification may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules within a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers.
The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
Embodiments also provide a computer-readable storage medium storing one or more programs that, when executed by an electronic system including a plurality of application programs, cause the electronic system to perform the method of the first embodiment. Details are not repeated here.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In addition, part of the present invention may be embodied as a computer program product, such as computer program instructions, which, when executed by a computer, can invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. Program instructions that invoke the methods of the present invention may be stored on a fixed or removable recording medium, and/or transmitted via a data stream on a broadcast or other signal-bearing medium, and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the invention comprises an apparatus including a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the methods and/or technical solutions according to the embodiments of the invention described above.

Claims (10)

1. A method of identifying a vehicle location in surveillance video, comprising the steps of:
S1: presetting a plurality of control points in a monitoring area of a monitoring video, and, for each control point, simultaneously acquiring a GPS coordinate of the control point and a two-dimensional coordinate of the control point in a monitoring picture;
S2: establishing a coordinate mapping relation between the two-dimensional coordinates of the monitoring picture and the GPS coordinates according to the two-dimensional coordinates and the GPS coordinates of the control points in the same monitoring picture;
S3: receiving the GPS coordinates of the vehicle, acquiring the two-dimensional coordinates corresponding to the vehicle in the monitoring picture according to the coordinate mapping relation, and identifying the position of the vehicle according to the two-dimensional coordinates.
2. The method for identifying the vehicle position in the surveillance video according to claim 1, wherein in step S1, two-dimensional coordinates of the control point in the surveillance picture are obtained, specifically:
collecting images of the control points, generating image models of the control points through an image training method, including an open-source algorithm for image recognition and positioning based on a deep neural network, and establishing an image model library;
and identifying the control point in the monitoring picture according to the image model in the image model library, and acquiring the two-dimensional coordinate of the control point in the monitoring picture.
3. The method for identifying the position of the vehicle in the surveillance video according to claim 2, wherein the specific steps of establishing the image model library are as follows:
collecting images of each control point, including images taken under different conditions of weather, light, angle and distance;
for the collected images, performing preprocessing on the images, including horizontal and vertical flipping, random cropping, random-angle rotation and changes of image contrast and brightness;
labeling the category and the position of the control point corresponding to each collected image to form labeled data matched with the image, establishing a training file library at the same time, and placing the labeled data in the training file library;
and establishing the configuration files required for training the images, and training the images with an open-source algorithm for image recognition and positioning based on a deep neural network, including the YOLO algorithm, to generate the image model library.
4. The method of identifying vehicle locations in surveillance video according to claim 3, further comprising, after generating the image model library:
when all the control points are identified without errors, calculating the relative position and the angle between the two-dimensional coordinates of any two control points in the monitoring picture as the relative position and angle standard between the control points;
and when the control points are identified subsequently, calculating the relative positions and angles between the control points again and comparing them with the relative position and angle standard, wherein, when the deviation from the values recorded in the relative position and angle standard exceeds a preset error value, the corresponding control point is considered misidentified.
5. The method for identifying the position of the vehicle in the surveillance video according to claim 1, wherein in step S3, the position of the vehicle is identified according to the two-dimensional coordinates, further comprising:
performing image processing on the monitoring video, and marking the position of the vehicle in the monitoring picture, for example with a bounding box, at the two-dimensional coordinates corresponding to the vehicle in the monitoring picture.
6. A system for identifying vehicle positions in a monitoring video, characterized by comprising a control point presetting module, a mapping relation establishing module and a vehicle position identification module;
the control point presetting module is used for presetting a plurality of control points in a monitoring area of a monitoring video, and simultaneously acquiring a GPS coordinate of the control points and a two-dimensional coordinate of the control points in a monitoring picture aiming at each control point;
the mapping relation establishing module is used for establishing a coordinate mapping relation between the two-dimensional coordinates of the monitoring picture and the GPS coordinates according to the two-dimensional coordinates and the GPS coordinates of the control points in the same monitoring picture;
the vehicle position identification module is used for receiving the GPS coordinates of the vehicle, acquiring the corresponding two-dimensional coordinates of the vehicle in the monitoring picture according to the coordinate mapping relation, and identifying the position of the vehicle according to the two-dimensional coordinates.
7. The system for identifying a vehicle location in a surveillance video according to claim 6, wherein the control point presetting module further comprises a GPS coordinate acquisition submodule and a two-dimensional coordinate acquisition submodule;
the GPS coordinate acquisition submodule is used for acquiring the GPS coordinate corresponding to the control point;
the two-dimensional coordinate obtaining submodule is configured to obtain the two-dimensional coordinates of the control point in the monitoring picture, and specifically to: collect images of the control points, generate image models of the control points through an image training method, including an open-source algorithm for image recognition and positioning based on a deep neural network, and establish an image model library; and identify the control point in the monitoring picture according to the image models in the image model library and acquire the two-dimensional coordinates of the control point in the monitoring picture.
8. The system for identifying a vehicle location in surveillance video according to claim 7, wherein the two-dimensional coordinate acquisition sub-module further comprises:
an image model library establishing unit, configured to establish the image model library, specifically to: collect images of each control point, including images taken under different conditions of weather, light, angle and distance; preprocess the collected images, including horizontal and vertical flipping, random cropping, random-angle rotation and changes of image contrast and brightness; label the category and the position of the control point corresponding to each collected image to form labeled data matched with the image, establish a training file library, and place the labeled data in the training file library; and establish the configuration files required for training, and train the images with an open-source algorithm for image recognition and positioning based on a deep neural network, including the YOLO algorithm, to generate the image model library.
a relative position and angle calculation unit, configured to calculate the relative position and angle standard, specifically to: when all the control points are identified without error, calculate the relative position and the angle between the two-dimensional coordinates of any two control points in the monitoring picture as the relative position and angle standard between the control points; and, when the control points are identified subsequently, calculate the relative positions and angles between the control points again and compare them with the relative position and angle standard, wherein, when the deviation from the values recorded in the relative position and angle standard exceeds a preset error value, the corresponding control point is considered misidentified.
9. The system for identifying vehicle locations in surveillance video according to claim 6, wherein the vehicle location identification module further comprises:
the image processing submodule is configured to perform image processing on the monitoring video and to mark the position of the vehicle in the monitoring picture, for example with a bounding box, at the corresponding two-dimensional coordinates of the vehicle in the monitoring picture.
10. A computer readable storage medium storing computer code which, when executed, performs the method of any of claims 1 to 5.
CN202010396990.0A 2020-05-12 2020-05-12 Method and system for identifying vehicle position in monitoring video Pending CN111597954A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010396990.0A CN111597954A (en) 2020-05-12 2020-05-12 Method and system for identifying vehicle position in monitoring video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010396990.0A CN111597954A (en) 2020-05-12 2020-05-12 Method and system for identifying vehicle position in monitoring video

Publications (1)

Publication Number Publication Date
CN111597954A true CN111597954A (en) 2020-08-28

Family

ID=72183635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010396990.0A Pending CN111597954A (en) 2020-05-12 2020-05-12 Method and system for identifying vehicle position in monitoring video

Country Status (1)

Country Link
CN (1) CN111597954A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114416686A (en) * 2021-12-06 2022-04-29 广州天长信息技术有限公司 Vehicle equipment fingerprint CARID identification system and identification method

Citations (4)

Publication number Priority date Publication date Assignee Title
US20040247173A1 (en) * 2001-10-29 2004-12-09 Frank Nielsen Non-flat image processing apparatus, image processing method, recording medium, and computer program
CN109190508A (en) * 2018-08-13 2019-01-11 南京财经大学 A kind of multi-cam data fusion method based on space coordinates
CN109523471A (en) * 2018-11-16 2019-03-26 厦门博聪信息技术有限公司 A kind of conversion method, system and the device of ground coordinate and wide angle cameras picture coordinate
CN109919975A (en) * 2019-02-20 2019-06-21 中国人民解放军陆军工程大学 A kind of wide area monitoring moving target correlating method based on coordinate calibration

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20040247173A1 (en) * 2001-10-29 2004-12-09 Frank Nielsen Non-flat image processing apparatus, image processing method, recording medium, and computer program
CN109190508A (en) * 2018-08-13 2019-01-11 南京财经大学 A kind of multi-cam data fusion method based on space coordinates
CN109523471A (en) * 2018-11-16 2019-03-26 厦门博聪信息技术有限公司 A kind of conversion method, system and the device of ground coordinate and wide angle cameras picture coordinate
CN109919975A (en) * 2019-02-20 2019-06-21 中国人民解放军陆军工程大学 A kind of wide area monitoring moving target correlating method based on coordinate calibration

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114416686A (en) * 2021-12-06 2022-04-29 广州天长信息技术有限公司 Vehicle equipment fingerprint CARID identification system and identification method
CN114416686B (en) * 2021-12-06 2023-04-14 广州天长信息技术有限公司 Vehicle equipment fingerprint CARID identification system and identification method

Similar Documents

Publication Publication Date Title
US11842516B2 (en) Homography through satellite image matching
CN109345599B (en) Method and system for converting ground coordinates and PTZ camera coordinates
CN109544870B (en) Alarm judgment method for intelligent monitoring system and intelligent monitoring system
CN110491060B (en) Robot, safety monitoring method and device thereof, and storage medium
CN108875531B (en) Face detection method, device and system and computer storage medium
CN112364843A (en) Plug-in aerial image target positioning detection method, system and equipment
CN110866515A (en) Method and device for identifying object behaviors in plant and electronic equipment
CN116311084B (en) Crowd gathering detection method and video monitoring equipment
CN111091104A (en) Target object protection detection method, device, equipment and storage medium
CN111597954A (en) Method and system for identifying vehicle position in monitoring video
CN111316324A (en) Automatic driving simulation system, method, equipment and storage medium
CN112001453B (en) Method and device for calculating accuracy of video event detection algorithm
CN109800684A (en) The determination method and device of object in a kind of video
CN112802100A (en) Intrusion detection method, device, equipment and computer readable storage medium
CN111615062A (en) Target person positioning method and system based on collision algorithm
CN111680680A (en) Object code positioning method and device, electronic equipment and storage medium
CN112651351B (en) Data processing method and device
CN113505643A (en) Violation target detection method and related device
CN114463654A (en) State detection method, device, equipment and computer storage medium
CN112802058A (en) Method and device for tracking illegal moving target
CN112818780A (en) Defense area setting method and device for aircraft monitoring and identifying system
CN113591543A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN111583336A (en) Robot and inspection method and device thereof
CN112649813B (en) Method for indoor safety inspection of important place, inspection equipment, robot and terminal
CN115376275B (en) Construction safety warning method and system based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination