CN112991742B - Visual simulation method and system for real-time traffic data - Google Patents
- Publication number: CN112991742B
- Application number: CN202110427414.2A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- frame
- data table
- video
- real
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
- G06F30/15—Vehicle, aircraft or watercraft design
- G06F30/20—Design optimisation, verification or simulation
- G06Q50/26—Government or public services
- G06Q50/40—Business processes related to the transportation industry
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06F2119/12—Timing analysis or timing optimisation
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention relates to the technical field of intelligent traffic, and in particular to a visual simulation method and system for real-time traffic data, comprising the following steps: receiving in real time the road monitoring video streams corresponding to a digital twin city; dividing each monitoring video stream in time order into multiple video segments of equal duration; acquiring the vehicle information in each video segment and storing all arrays of that segment in a data table; when traversing a data table, starting the traversal with the (N+1)-th frame as the first frame and reading the vehicle information in the data table; and when the last frame of a data table is reached, entering the next data table, again starting the traversal with its (N+1)-th frame as the first frame, continuing to map the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video. The invention improves the processing efficiency of monitoring video streams, reduces the delay of the visual display, ensures the continuity of the simulated video, and reduces data interruptions.
Description
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a visual simulation method and system for real-time traffic data.
Background
As economic trade and social activity in Chinese cities grow increasingly busy, urban traffic has expanded at an unprecedented rate; the traffic problems of Chinese cities, especially large ones, are now quite prominent, and city management has become ever more complex. In managing cities, road monitoring systems play an important role in public security and control. Urban road monitoring points are mainly distributed at intersections and key road sections with concentrated traffic and pedestrian flow; they automatically record the composition, flow distribution and violation conditions of vehicles on the road all year round, providing important basic and operational data for traffic planning, traffic management and road maintenance departments.
As road monitoring points continue to multiply, monitoring systems face a serious problem: the massive volume of video is dispersed and isolated, viewing angles are incomplete, and positions are unclear, so video pictures cannot effectively present the real spatial geographic scene or support effective, intuitive traffic information extraction, vehicle tracking and command scheduling. With the emergence of the "video + GIS" application form, monitoring video stream data is combined with three-dimensional model data of the monitored scene: the monitoring video stream dynamically displays changes in the real scene in real time, while the three-dimensional model accurately and truly reflects the spatial characteristics of the real world, thereby addressing these problems.
Overlaying a real-time monitoring video stream onto the GIS scene raises new "real-time" technical problems, including: 1. because the data volume of a real-time monitoring video stream is large, the processing efficiency of the stream must be improved so that real-time visual display is possible and its delay is minimized; 2. because the video stream is monitored in real time, the continuity of the video must be ensured during real-time visual display, and data interruptions must be reduced.
Disclosure of Invention
The invention aims to provide a visual simulation method and a visual simulation system for real-time traffic data, so as to improve the problems.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
a visual simulation method of real-time traffic data comprises the following steps:
building a digital twin city based on the real city;
receiving real-time road monitoring video streams corresponding to the digital twin cities in real time, and dividing preset running areas in the digital twin cities according to the monitoring areas of the monitoring video streams;
dividing the monitoring video stream into a plurality of sections of videos with the same time length according to a time sequence, and overlapping N frames of images between any two adjacent sections of videos;
the method comprises the steps that vehicle information in each section of video is obtained, the vehicle information of each vehicle is stored in different arrays respectively, at least the initial frame number and the final frame number of the vehicle are recorded in each array, and all the arrays of the section of video are stored in one data table to obtain a plurality of data tables;
sequentially traversing the data tables in time order; when reaching a data table, starting the traversal with the (N+1)-th frame as the first frame, reading the vehicle information in the data table, and mapping each vehicle in the digital twin city;
and when traversing to the last frame of the data table, entering the next data table, starting traversing by taking the (N + 1) th frame of the data table as the first frame, continuously mapping the vehicle in the digital twin city, and generating a continuous real-time traffic simulation video.
Further, the receiving a real-time road monitoring video stream corresponding to the digital twin city in real time, and dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video stream includes:
determining roads needing to be monitored in a real city;
receiving a monitoring video stream shot by the road in real time;
and positioning the digital twin city to a road in the monitoring video stream, and setting the road shot in the monitoring video stream picture as a preset driving area in the digital twin city.
Further, the dividing the monitoring video stream into a plurality of segments of videos with the same duration in time sequence, where N frames of images are overlapped between any two adjacent segments of videos includes:
setting the duration of each video segment as T and the number of overlapped frames as N;
determining a start timestamp t' and an end timestamp t'' for each video segment, wherein N frames of images overlap between the end timestamp of the previous segment and the start timestamp of the next segment;
and dividing the monitoring video stream into multiple segments of equal duration according to the start timestamp t' and the end timestamp t'' of each segment.
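As a concrete illustration of the timestamp arithmetic described above, the following is a minimal Python sketch; the segment duration T, overlap count N and frame rate are assumed example values, not prescribed by the patent:

```python
# Illustrative sketch of the segment-boundary calculation described above.
# T (segment duration in seconds), N (overlapped frames) and fps are
# assumed example values, not taken from the patent.
def segment_boundaries(stream_start, total_duration, T=10.0, N=30, fps=30.0):
    """Yield (start, end) timestamps for equal-length segments where each
    segment begins N frames before the previous segment ends."""
    overlap = N / fps            # duration T' of the N overlapped frames
    t_start = stream_start
    while t_start + T <= stream_start + total_duration:
        t_end = t_start + T      # t'' = t' + T
        yield (t_start, t_end)
        t_start = t_end - overlap  # next t' = previous t'' minus T'
```

For example, with a 30 s stream, T = 10 s, N = 30 and 30 fps, this yields segments (0, 10), (9, 19) and (18, 28), each overlapping its predecessor by one second of frames.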
Further, the vehicle information in each section of video is obtained, the vehicle information of each vehicle is respectively stored in different arrays, at least the starting frame number and the ending frame number of the vehicle are recorded in each array, and all the arrays of one section of video are stored in one data table to obtain a plurality of data tables, including:
utilizing an AI recognition algorithm to divide a section of video into a plurality of frames of images, and recognizing vehicle information in each frame of image, wherein the vehicle information at least comprises the type, color and license plate number of a vehicle, pixel coordinates of the vehicle in the current frame of image and actual offset coordinates;
calculating the running speed of the vehicle in the current frame image by using the front and rear N frames of images of the current frame image, and tracking the running track of the vehicle;
storing vehicle information of a vehicle into an array, each array at least comprising 8 groups of data, respectively: the starting frame number when the vehicle appears, the ending frame number when the vehicle disappears, the vehicle type, the vehicle color, the license plate number, the X-axis offset coordinate, the Y-axis offset coordinate and the running speed; the X-axis offset coordinate, the Y-axis offset coordinate and the running speed all comprise data of all frame images from a starting frame number to an ending frame number;
and obtaining a plurality of arrays, storing the arrays into a data table, and recording the total frame number of the data table.
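The storage layout described above, one array per vehicle and all arrays of a segment gathered in one data table, can be sketched as follows; the class and field names are hypothetical, chosen only to mirror the eight data groups listed:

```python
from dataclasses import dataclass, field

# Hypothetical container for the 8 groups of per-vehicle data listed above;
# the names are illustrative, not taken from the patent text.
@dataclass
class VehicleArray:
    start_frame: int          # frame number at which the vehicle appears
    end_frame: int            # frame number at which the vehicle disappears
    vehicle_type: str
    color: str
    license_plate: str
    x_offsets: list = field(default_factory=list)  # per-frame X offset (m)
    y_offsets: list = field(default_factory=list)  # per-frame Y offset (m)
    speeds: list = field(default_factory=list)     # per-frame running speed

    def frame_count(self):
        """Number of frame images from start frame to end frame inclusive."""
        return self.end_frame - self.start_frame + 1

# A "data table" for one video segment is then a collection of such arrays
# plus the segment's recorded total frame count.
@dataclass
class DataTable:
    total_frames: int
    arrays: list = field(default_factory=list)
```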
Further, the upper left corner of the video picture is used as a coordinate origin;
selecting at least two characteristic points in a video picture as a reference point and a mark point, and calculating the actual distance of a unit pixel by using the reference point and the mark point;
determining a first pixel coordinate of the vehicle relative to a coordinate origin in a current frame image;
and calculating the offset coordinate of the vehicle, namely the actual distance of the vehicle relative to the mark point by using the first pixel coordinate and the actual distance of the unit pixel.
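A minimal sketch of this pixel-to-distance conversion, assuming the origin at the top-left corner of the frame and two feature points (reference point and mark point) whose real-world separation is known; function names and all numbers are illustrative:

```python
import math

# Sketch of the conversion described above. The reference point and the
# mark point are two feature points with a known real-world separation;
# all concrete values here are made-up examples.
def metres_per_pixel(ref_px, mark_px, real_distance_m):
    """Actual distance represented by one pixel, derived from two points."""
    pixel_dist = math.hypot(mark_px[0] - ref_px[0], mark_px[1] - ref_px[1])
    return real_distance_m / pixel_dist

def offset_coords(vehicle_px, mark_px, m_per_px):
    """Vehicle offset (metres) relative to the mark point, with the image
    origin taken at the top-left corner of the video frame."""
    return ((vehicle_px[0] - mark_px[0]) * m_per_px,
            (vehicle_px[1] - mark_px[1]) * m_per_px)
```

For instance, if the two feature points are 100 px apart in the image and 50 m apart on the road, each pixel represents 0.5 m, and a vehicle 20 px right of and 40 px below the mark point sits at offset (10 m, 20 m).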
Further, the data tables are traversed sequentially in time order; when reaching data table Dm, the traversal starts with the (N+1)-th frame as the first frame, the vehicle information in data table Dm is read, and the data of each vehicle is mapped in the digital twin city, comprising:
data table D for traversing a video segment starting from the (N + 1) th frame imagemReading the current ergodic frame number;
when traversing to the initial frame number of an array, reading the vehicle information in the array, generating a corresponding simulated vehicle, and transmitting the array to the simulated vehicle;
determining a coordinate position generated by a simulated vehicle in a digital city scene according to the offset coordinate of the vehicle, and when the coordinate position is outside a preset driving area, correcting the coordinate position of the simulated vehicle to generate the simulated vehicle;
and when the number of the ending frames is traversed, destroying the simulated vehicles which normally disappear in the preset driving area.
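The traversal loop described in these steps might look like the following sketch; it assumes a data table object exposing `total_frames` and per-vehicle arrays with `start_frame`, `end_frame` and `license_plate` attributes (hypothetical names mirroring the storage step), with `spawn`/`destroy` standing in for the digital-twin engine calls:

```python
# Minimal sketch of the per-frame traversal described above, under the
# assumed attribute names; spawn/destroy are placeholders for the engine.
def traverse_table(table, N, active, spawn, destroy):
    """Walk one segment's data table starting at frame N+1, creating a
    simulated vehicle at each array's start frame and removing it at its
    end frame. `active` maps license plate -> simulated-vehicle handle."""
    for frame in range(N + 1, table.total_frames + 1):
        for arr in table.arrays:
            if arr.start_frame == frame and arr.license_plate not in active:
                active[arr.license_plate] = spawn(arr)   # vehicle appears
            elif arr.end_frame == frame and arr.license_plate in active:
                destroy(active.pop(arr.license_plate))   # vehicle disappears
```

A real implementation would additionally perform the out-of-area correction and the destroy-or-hand-off check from the surrounding steps before spawning or destroying.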
Further, when the coordinate position is outside the preset driving area, the simulated vehicle is generated after correcting the coordinate position of the simulated vehicle, including:
if the coordinate position in the current frame image is outside the preset driving area, recalculating the offset coordinate according to the driving speed set by the previous frame image, and continuously generating the simulated vehicle at the coordinate position corresponding to the offset coordinate.
Further, when traversing to the number of the termination frames, destroying the simulated vehicle which normally disappears in the preset driving area, including:
when traversing to the number of the termination frames, judging whether the simulated vehicle disappears outside a preset driving area or not;
if yes, destroying the simulated vehicle;
if not, recalculating the set running speed of the previous frame image to obtain an offset coordinate;
judging whether a new simulated vehicle is generated in the current frame, wherein the distance between the new simulated vehicle and the offset coordinate of the simulated vehicle is less than or equal to 1 m;
if so, giving the data of the new simulated vehicle to the simulated vehicle, continuously reading the data in the array, continuously generating the simulated vehicle according to the offset coordinates in the array, and destroying the new simulated vehicle;
and if not, continuing to generate the simulated vehicle at the offset coordinate.
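The destroy-or-hand-off decision above can be sketched as follows; `driving_area` is a predicate for the preset driving area, the 1 m threshold comes from the text, and everything else (the names, and extrapolation by the last per-frame velocity) is an assumption:

```python
import math

# Hedged sketch of the end-of-track decision: a vehicle whose record ends
# while it is still inside the driving area is kept alive by extrapolation,
# and is adopted by any new vehicle spawned within 1 m of it.
def on_end_frame(vehicle, driving_area, new_vehicles):
    """driving_area: predicate over (x, y); new_vehicles: list of
    (handle, (x, y)) spawned in the current frame. Returns an action tag."""
    if not driving_area(vehicle.x, vehicle.y):
        return "destroy"                  # disappeared normally, outside area
    vehicle.x += vehicle.vx               # extrapolate using the speed set
    vehicle.y += vehicle.vy               # by the previous frame image
    for handle, (nx, ny) in new_vehicles:
        if math.hypot(nx - vehicle.x, ny - vehicle.y) <= 1.0:
            vehicle.adopt(handle)         # take over the new vehicle's data
            return "adopted"
    return "extrapolate"                  # keep generating at the offset
```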
Further, when the traversal reaches the last frame of data table Dm, entering the next data table Dm+1, starting the traversal with the (N+1)-th frame of data table Dm+1 as the first frame, continuing to map vehicles in the digital twin city, and generating a continuous real-time traffic simulation video, comprising:
judging whether the traversal has reached the last frame of data table Dm;
if yes, jumping to the (N+1)-th frame of data table Dm+1 and continuing the traversal;
when the start frame number of an array is reached, reading the vehicle information in the array and judging whether the license plate number of the vehicle is the same as that of a simulated vehicle from data table Dm;
if they are different, generating a new simulated vehicle and passing the array to it;
if they are the same, assigning the array to that existing simulated vehicle, so that it continues to be generated in the digital twin city.
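Continuing a vehicle across adjacent data tables by license-plate match, as described, might be sketched like this; `active` is an assumed mapping from license plate to a live simulated vehicle, and `spawn`/`assign` are placeholders for the engine calls:

```python
# Sketch of the cross-table continuation step: when the next data table
# begins, an array whose license plate matches an already-live simulated
# vehicle is handed to it instead of spawning a duplicate. Names assumed.
def continue_into_next_table(next_table, active, spawn, assign):
    for arr in next_table.arrays:
        if arr.license_plate in active:
            assign(active[arr.license_plate], arr)   # same vehicle carries on
        else:
            active[arr.license_plate] = spawn(arr)   # genuinely new vehicle
```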
A visual simulation system of real-time traffic data, the system comprising:
a modeling unit: building a digital twin city based on the real city;
a receiving unit: receiving real-time road monitoring video streams corresponding to the digital twin cities in real time, and dividing preset running areas in the digital twin cities according to the monitoring areas of the monitoring video streams;
video stream dividing unit: dividing the monitoring video stream into a plurality of sections of videos with the same time length according to a time sequence, and overlapping N frames of images between any two adjacent sections of videos;
a storage unit: the method comprises the steps that vehicle information in each section of video is obtained, the vehicle information of each vehicle is stored in different arrays respectively, at least the initial frame number and the final frame number of the vehicle are recorded in each array, and all the arrays of the section of video are stored in one data table to obtain a plurality of data tables;
the method specifically comprises the following steps:
utilizing an AI recognition algorithm to divide a section of video into a plurality of frames of images, and recognizing vehicle information in each frame of image, wherein the vehicle information at least comprises the type, color and license plate number of a vehicle, pixel coordinates of the vehicle in the current frame of image and actual offset coordinates;
the calculation process of the pixel coordinates and the actual offset coordinates of the vehicle in the current frame image comprises the following steps:
taking the upper left corner of a video picture as a coordinate origin;
selecting at least two characteristic points in a video picture as a reference point and a mark point, and calculating the actual distance of a unit pixel by using the reference point and the mark point;
determining a first pixel coordinate of the vehicle relative to a coordinate origin in a current frame image;
and calculating the offset coordinate of the vehicle, namely the actual distance of the vehicle relative to the mark point by using the first pixel coordinate and the actual distance of the unit pixel.
A video generation unit: sequentially traversing the data tables in time order; when reaching data table Dm, starting the traversal with the (N+1)-th frame as the first frame, reading the vehicle information in data table Dm, and mapping each vehicle in the digital twin city;
a video connection unit: when traversing to the data table DmThe last frame of (2), enter next data table Dm+1By data table Dm+1As the first frame, starts a passAnd continuing to map the vehicles in the digital twin city to generate continuous real-time traffic simulation videos.
The invention has the beneficial effects that:
1. The road monitoring video stream is received in real time; whenever the received stream reaches a preset duration it is cut off and divided into one video segment, i.e. the stream received in real time is continuously divided into multiple segments of the preset duration. Meanwhile, once the first segment has been divided, the vehicle information in it is extracted, the data of each vehicle is stored separately in an array, and all arrays of the segment are stored in UE4 as one data table; as soon as a data table is formed it can be traversed, and the simulated vehicles are generated and mapped in the digital twin city. By the time the first segment has been processed, the second segment has also been divided; the second segment, and the third, fourth and subsequent segments, are processed in the same way until transmission of the monitoring video stream stops.
2. By accurately fusing the real-time monitoring video streams with the three-dimensional model of the monitored scene in real time, multiple monitoring video streams distributed at different positions and angles can be brought into a full-space three-dimensional scene under a unified spatial reference, enabling viewing and replay of the monitoring video streams from any position and angle, monitoring-route tracking, target tracking and similar functions. Applied to city management, the method can provide important basic and operational data for traffic planning, traffic management and road maintenance departments, and provides important technical means and evidence for quickly correcting traffic violations and quickly solving hit-and-run and motor vehicle robbery cases, for example: suspect vehicle monitoring and control, hit-and-run vehicle tracking, abnormal events that cannot be identified by the human eye, illegal vehicles, and the like.
3. When the video is cut, N frames of images overlap between two adjacent segments. The inter-frame correlation between the preceding and following N frames and the current image is used to determine whether a driving track belongs to the same vehicle, so that the vehicle can be tracked and its information correctly recorded in an array, preventing subsequent data lookups from failing and causing interrupted or intermittent simulation. The N overlapping frames between adjacent segments thus guarantee the quality of the simulated video.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic flow chart illustrating a method for visually simulating real-time traffic data according to an embodiment of the present invention;
FIG. 2 is a flow chart of a data table traversal method described in embodiments of the present invention;
FIG. 3 is a detailed flow chart of array traversal as described in the embodiments of the present invention;
fig. 4 is a detailed flow chart of the destruction vehicle in the embodiment of the invention;
FIG. 5 is a video frame marked with landmark points and reference points according to an embodiment of the present invention;
fig. 6 is a frame image on which a travel track is drawn according to the embodiment of the present invention;
fig. 7 is a schematic diagram of the visual expression effect of the surveillance video stream in the digital twin city scene according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers or letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example 1
As shown in fig. 1, the present embodiment provides a visual simulation method of real-time traffic data, which includes the following steps:
s1, building a digital twin city based on a real city;
specifically, a map of a city is obtained, and according to the map data and a real scale, a digital twin city is built in a development platform, where the development platform includes UE4, Unity, and OSG, and preferably, a UE4 development platform is adopted in this embodiment.
S2, receiving real-time road monitoring video streams corresponding to the digital twin city in real time, dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video streams, and automatically identifying whether a simulated vehicle is generated outside the preset driving area in the visualization process so as to correct error data;
based on the above embodiment, the S2 specifically includes:
S21, determining the roads that need to be monitored in the real city, and deploying monitoring equipment, network storage and servers for real-time monitoring above those roads, where the monitoring includes drone aerial photography and road monitoring cameras; preferably, the cameras are arranged between the roads on the two sides so that the vehicles on both the left and right roads are captured as clearly as possible, and the heads or tails of vehicles must be captured so that the AI recognition algorithm in the subsequent steps can identify the license plate numbers; the higher the definition, the more accurate the recognition;
s22, receiving the monitoring video stream shot by the road in real time, specifically, receiving a transmission protocol adopted by the monitoring video stream in real time according to a client request, selecting a proper coding and decoding mode, and transmitting the real-time dynamic video stream of the camera to the client according to a frame request;
s23, positioning the digital twin city to a road in the monitoring video stream, and setting the road shot in the monitoring video stream picture as a preset driving area in the digital twin city, wherein the part of the road extending out of the monitoring video stream does not belong to the preset driving area.
S3, dividing the monitoring video stream into a plurality of sections of videos with the same time length according to a time sequence, and overlapping N frames of images between any two adjacent sections of videos;
based on the above embodiment, the S3 specifically includes the following steps:
s31, setting the duration of each video segment as T and the number of overlapped frames as N;
s32, determining a start time stamp t 'and an end time stamp t' of each video, wherein N frames of images are overlapped between the end time stamp of the previous video and the start time stamp of the next video;
specifically, for example: if the start timestamp of the first segment of video is t′1, then its end timestamp is t″1 = t′1 + T; letting the duration of the N frames of images be T′, the start timestamp of the second segment of video is t′2 = t″1 − T′ and its end timestamp is t″2 = t′2 + T, and so on;
and S33, dividing the monitoring video stream into a plurality of sections of videos with the same time length according to the starting time stamp and the ending time stamp of each section of video.
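Steps S31–S33 can be sketched in Python as follows. This is an illustrative sketch only (the function and variable names are my own, not from the patent): it computes the start and end timestamps of each segment given the segment duration T, the frame rate, and the overlap of N frames.

```python
def split_segments(stream_start, total_duration, T, fps, N):
    """Divide a video stream of `total_duration` seconds into segments of
    length T, with N frames of overlap between adjacent segments."""
    overlap = N / fps                        # duration T' of the N overlapped frames
    segments = []
    start = stream_start
    while start + T <= stream_start + total_duration:
        segments.append((start, start + T))  # (start timestamp t', end timestamp t'')
        start = start + T - overlap          # next segment begins N frames earlier
    return segments
```

With T = 2 s, 30 frames/s and N = 5 as in the embodiment, the second segment starts 5/30 s before the first one ends, reproducing the N-frame overlap.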
In this embodiment, timing starts from the timestamp of the received monitoring video stream. When the duration of the received stream reaches T, the first segment of video is split off and processed with the subsequent steps S4–S6 while the second segment is being split off; after the first segment has been processed, processing of the second segment begins, and so on. The stream-splitting process and the video-processing process thus run in parallel, improving the processing efficiency of the video.
S4, obtaining vehicle information in each section of video, respectively storing the vehicle information of each vehicle into different arrays, wherein at least a starting frame number and a terminating frame number of the vehicle are recorded in each array, and storing all the arrays of the section of video into one data table to obtain a plurality of data tables;
based on the above embodiment, the S4 specifically includes the following steps:
s41, splitting a segment of video into multiple frame images using an AI (artificial intelligence) recognition algorithm, and recognizing the vehicle information in each frame image, wherein the vehicle information at least includes the type, color and license plate number of the vehicle, and the pixel coordinates and actual offset coordinates of the vehicle in the current frame image;
specifically, the S41 includes the following steps:
s411, inputting a section of video into an AI identification algorithm, automatically dividing the monitoring video stream into single-frame images by the AI identification algorithm, and storing the single-frame images according to a time sequence; specifically, in the obtained single-frame image, each frame of image can be continuously obtained, or a plurality of frames of images can be obtained at intervals, and then the obtained frames of images are stored for the next identification operation;
s412, identifying all vehicles in each frame of image, and the types and colors of the vehicles, and identifying the identified vehicles;
specifically, the YOLOv3 algorithm is trained on an optimized CompCars data set. CompCars is currently the largest and most richly categorized public data set for evaluating fine-grained vehicle recognition; its vehicle images are collected from the web and from surveillance equipment, with 136,726 web images covering 1,716 vehicle models from 163 automobile manufacturers, and 44,481 surveillance images of vehicle fronts covering 281 vehicle models. Full-frame detection is performed on each frame image with the trained YOLOv3 algorithm, all recognized vehicles are framed with rectangular boxes, and the type and color of each vehicle are recognized. The vehicle types are user-defined; in this embodiment the recognized types include: off-road vehicle, car, minibus, truck, bus and motorcycle, and further vehicle types can be added as actually needed.
S413, identifying the license plate number of the vehicle;
specifically, a sample set is made, a pre-trained target detection network based on a YOLOv3 algorithm is adopted based on the fusion characteristics of video images, after offline learning is carried out on the sample set, the detection of the license plate number of a vehicle in a frame image is realized, and the vehicle and the corresponding license plate number are subjected to associated marking;
s414, determining pixel coordinates of the vehicle in each frame of image, and calculating actual offset coordinates according to the pixel coordinates;
specifically, the step S414 includes the following steps:
s4141, taking the upper left corner of the video picture as a coordinate origin O (0, 0);
s4142, selecting at least two feature points in the video picture as a mark point Q1(x1, y1) and a reference point Q2(x2, y2), and using the mark point Q1 and the reference point Q2 to calculate the actual distance represented by a unit pixel;
specifically, since the shooting angle of view of the surveillance video stream is fixed, the video picture of every frame is the same. It is only necessary to capture any one frame of video image from the monitoring video stream and search for feature points that are easy to locate, such as: the intersection point of two lane boundaries, the midpoint or end point of a lane boundary, the connection point of a road sign with the ground, or the connection point of a street lamp with the ground. At least one feature point is used as a mark point, which is marked at the same position point Q′1 in the scene of the digital twin city, thereby establishing a mapping relation between the video picture and the scene; the actual distance of a unit pixel is then calculated using the reference point and the mark point;
specifically, the actual distance of a unit pixel on the x-axis is calculated as:

P′x = a · Distance(x′1, x′2) / |x1 − x2|  (1)

wherein P′x is the actual distance of a unit pixel on the x-axis, Distance(x′1, x′2) is the actual distance between Q1 and Q2 along the x-axis, |x1 − x2| is the pixel distance between Q1 and Q2 in the x-axis direction, and a is an adjustment parameter for the distance between the two; because the video shooting angle is oblique, the actual distance of a unit pixel on the side close to the lens is smaller than that on the side far from the lens, and this problem is compensated by adjusting the parameter a;
specifically, the Distance(x′1, x′2) can be obtained from the longitude and latitude of the two points; the calculation process, by formulae (2) to (4), is:

h = sin²(Δφ/2) + cos φ1 · cos φ2 · sin²(Δρ/2)  (2)

Δσ = 2 · arcsin(√h)  (3)

Distance = R · Δσ  (4)

wherein R is the radius of the earth, φ1 and φ2 are respectively the latitudes of the two points, Δφ = φ2 − φ1 is their latitude difference, and Δρ is the difference in longitude between the two points.
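The great-circle distance of formulae (2)–(4) can be sketched as a standard haversine implementation in Python. This is offered as an assumption-based illustration (the mean earth radius constant is mine); inputs are in degrees and the result is in metres.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean earth radius R, in metres (assumed value)

def great_circle_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (latitude, longitude) points, in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = phi2 - phi1                    # latitude difference
    d_rho = math.radians(lon2 - lon1)      # longitude difference (the text's delta-rho)
    h = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_rho / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(h))
```

As a sanity check, one degree of longitude along the equator comes out to roughly 111.2 km.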
Specifically, the adjustment parameter a is calculated from the vertical position of the pixel in the frame, wherein H is the total pixel length on the y-axis and yP is the y-axis pixel coordinate of any pixel point P;
similarly, the actual distance of a unit pixel on the y-axis is calculated as:

P′y = a · Distance(y′1, y′2) / |y1 − y2|

wherein P′y is the actual distance of a unit pixel on the y-axis, Distance(y′1, y′2) is the actual distance between Q1 and Q2 along the y-axis, |y1 − y2| is the pixel distance between Q1 and Q2 in the y-axis direction, and a is the distance adjustment parameter;
referring to fig. 5, Q1 and Q2 are respectively the mark point and the reference point selected in this embodiment; their pixel coordinates relative to the coordinate origin O are Q1(1045, 475) and Q2(26, 653).
S4143, determining a first pixel coordinate C(xc, yc) of the vehicle relative to the coordinate origin in the current frame image;
S4144, calculating the offset coordinate C′(x′c, y′c) of the vehicle, i.e. the actual distance of the vehicle relative to the mark point Q1(x1, y1), from the first pixel coordinate and the actual distance of a unit pixel, as:

x′c = (xc − x1) · P′x, y′c = (yc − y1) · P′y
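Steps S4142–S4144 can be illustrated with a small Python sketch. The pixel coordinates of Q1 and Q2 are the ones given for fig. 5, but the real-world distances between them and the adjustment parameter are hypothetical placeholders, since those depend on the actual scene.

```python
def unit_pixel_distance(px1, px2, real_distance, a=1.0):
    """Actual distance represented by one pixel along one axis:
    P' = a * Distance / |p1 - p2|  (in the style of formula (1))."""
    return a * real_distance / abs(px1 - px2)

def offset_coordinate(pixel_xy, mark_xy, unit_x, unit_y):
    """Offset coordinate of a vehicle: its actual distance from the mark point Q1."""
    return ((pixel_xy[0] - mark_xy[0]) * unit_x,
            (pixel_xy[1] - mark_xy[1]) * unit_y)

# Mark point Q1 and reference point Q2 (pixel coordinates from fig. 5)
Q1, Q2 = (1045, 475), (26, 653)
# Hypothetical real-world distances between Q1 and Q2 along each axis, in metres
px = unit_pixel_distance(Q1[0], Q2[0], real_distance=40.0)  # metres per x-pixel
py = unit_pixel_distance(Q1[1], Q2[1], real_distance=12.0)  # metres per y-pixel

# A vehicle whose first pixel coordinate is C(800, 520)
vehicle_offset = offset_coordinate((800, 520), Q1, px, py)
```

The resulting offset is signed: negative along x (the vehicle lies left of Q1 in pixel space) and positive along y.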
s42, calculating the running speed of the vehicle in the current frame image using the N frame images before and after the current frame image, and tracking the driving track of the vehicle;
specifically, the S42 includes the following steps:
s421, respectively obtaining offset coordinates of the vehicle in the front frame image and the back frame image of the current frame;
in this embodiment, the offset coordinate of the frame 5 frames before the current frame F (frame F−5) and the offset coordinate of the frame 5 frames after it (frame F+5) are obtained;
s422, calculating the running speed of the vehicle in the current frame image using the displacement-velocity formula v = x/t, wherein v is the running speed of the vehicle in the current frame image, x is the distance between the offset coordinate at frame F−5 and the offset coordinate at frame F+5, and t is the time interval between those two frame images;
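Step S42 can be sketched as follows; a minimal illustration, assuming a 30 frames/s stream and the F−5 / F+5 offsets described above (names are my own).

```python
import math

def speed_at_frame(offsets, frame, fps=30, gap=5):
    """Running speed at frame F from the offset coordinates (in metres)
    at frames F-gap and F+gap: v = x / t."""
    (x0, y0), (x1, y1) = offsets[frame - gap], offsets[frame + gap]
    distance = math.hypot(x1 - x0, y1 - y0)  # displacement x between the two frames
    dt = 2 * gap / fps                       # time interval t between F-5 and F+5
    return distance / dt

# A vehicle moving 0.5 m along x per frame should read 15 m/s at 30 fps
offsets = {f: (0.5 * f, 0.0) for f in range(0, 21)}
v = speed_at_frame(offsets, frame=10)
```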
s423, tracking the driving track of the vehicle with the STRCF algorithm using the N consecutive preceding frame images (note: if frames are stored at intervals with an interval of 5 frames, the N/5 consecutive preceding stored frame images are used instead; the same applies to the preceding and following N frames) — here, the 5 consecutive preceding frame images — and drawing the driving track of each vehicle in the frame images; referring to fig. 6, not only the driving track of each vehicle is drawn, but also its driving speed and offset coordinates;
specifically, the STRCF algorithm learns the correlation filter f at the t-th frame by minimizing:

argmin_f (1/2) ‖ Σ_{e=1..E} x_t^e ∗ f^e − y_t ‖² + (1/2) Σ_{e=1..E} ‖ w · f^e ‖² + (μ/2) ‖ f − f_{t−1} ‖²

wherein E is the total number of feature maps in the surveillance-video feature map fusing convolutional neural features and HOG features, e ∈ [1, E]; T is the total frame number, x_t is the t-th frame image, t ∈ [1, T]; x_t^e is the e-th feature map of the t-th frame image; f^e is the correlation filter learned for the e-th feature map; y_t is the label of the t-th frame image; w is a spatial regularization weight function; f is the correlation filter learned at the t-th frame, f_{t−1} is the correlation filter learned at the (t−1)-th frame, and f_0 = f_1; μ is the temporal regularization factor; the operator · denotes the Hadamard product, ∗ denotes correlation, and ‖·‖ denotes the modulus of a vector.
S43, storing the vehicle information of one vehicle in one array, wherein each array includes at least 8 groups of data, respectively: the starting frame number when the vehicle appears, the ending frame number when the vehicle disappears, the vehicle type, the vehicle color, the license plate number, the x-axis offset coordinate (x′c), the y-axis offset coordinate (y′c) and the running speed; the x-axis offset coordinate, the y-axis offset coordinate and the running speed each contain data for every frame image from the starting frame number to the ending frame number;
s44, obtaining a plurality of arrays and storing them in one data table, and recording the total frame number of the data table; after the monitoring video stream is processed by the AI recognition algorithm, a csv file storing the vehicle information is obtained and is stored into the UE4 in the form of a data table;
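The array-per-vehicle layout of steps S43–S44 can be sketched as a plain Python structure written out as csv. The field names are my own shorthand for the 8 data groups listed above, and the per-frame series are packed into semicolon-joined strings purely for illustration.

```python
import csv, io

FIELDS = ["start_frame", "end_frame", "type", "color", "plate",
          "x_offsets", "y_offsets", "speeds"]

def vehicle_array(start, end, vtype, color, plate, xs, ys, vs):
    """One array per vehicle; xs/ys/vs hold one value per frame from start to end."""
    assert len(xs) == len(ys) == len(vs) == end - start + 1
    return {"start_frame": start, "end_frame": end, "type": vtype,
            "color": color, "plate": plate,
            "x_offsets": ";".join(map(str, xs)),
            "y_offsets": ";".join(map(str, ys)),
            "speeds": ";".join(map(str, vs))}

def data_table_csv(arrays):
    """Store all arrays of one video segment into one csv data table."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(arrays)
    return buf.getvalue()

table = data_table_csv([vehicle_array(3, 81, "car", "red", "ABC123",
                                      list(range(79)), [0] * 79, [15.0] * 79)])
```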
s45, repeating the above steps and storing the vehicle information of each segment of video in a data table, obtaining a plurality of data tables D. The data tables D are generated continuously: after the first data table has been generated, step S5 begins, reading the data in the first data table to perform visual simulation in the digital twin city, so that the time difference between the generated simulated video and the video of the real road is one video-processing time plus T. Using step S4 of this embodiment, the rendering speed can be increased to 0.2 s per frame.
A specific example is as follows: let the duration T of one segment of video be 2 s and the frame rate of the monitoring video stream be 30 frames/s; then there are 60 frames of images in the segment, and with one frame stored every 5 frames, 12 frames of images are stored. The time required to process this segment is 2.4 s, so the time difference between the simulated video and the video of the real road is 4.4 s.
S5, traversing the data tables in sequence according to the time sequence; when traversing to data table Dm, starting the traversal with the (N+1)-th frame as the first frame, reading the vehicle information in data table Dm, and mapping each vehicle into the digital twin city;
referring to fig. 2, based on the above embodiment, the step S5 specifically includes the following steps:
s51, traversing the data table Dm of one segment of video starting from the (N+1)-th frame image, and reading the currently traversed frame number;
as described in step S42, the running speed of the vehicle in the current frame image is calculated using the N preceding and N following frame images, and the driving track of the vehicle is tracked using N consecutive preceding frames, so any two adjacent segments of video need to overlap by N frame images. During visualization, to avoid generating overlapping video, the data in each data table is traversed with the (N+1)-th frame image as the first frame (if the images are stored at intervals of 5 frames, the traversal should start with the (N+5)-th frame image as the first frame).
S52, when traversing to the initial frame number of an array, reading the vehicle information in the array, generating a corresponding simulated vehicle, and transmitting the array to the simulated vehicle; preferably, the method further comprises creating a vehicle identification control object for reading the associated vehicle information in the data table;
specifically, a SetTimerByEvent node in the UE4 is used to cycle through the data table, recording on each cycle the current cycle count, i.e. the number of frames traversed. When the traversed frame number equals the starting frame number of some array, processing of the vehicle associated with that starting frame number begins and a corresponding simulated vehicle is generated for it: according to the model and color in the vehicle information, the simulated vehicle calls the corresponding model and color from a preset library, in which models of an off-road vehicle, a car, a minibus, a truck, a bus and a motorcycle as well as a plurality of colors are prestored, so that a vehicle similar to the real vehicle is generated from the data in the array.
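The per-frame traversal of steps S51–S52 (driven in UE4 by the SetTimerByEvent loop) can be mimicked in plain Python. This is an illustrative sketch, not UE4 code: spawning a simulated vehicle is simply recorded in a list, and arrays whose starting frame lies inside the N-frame overlap are skipped because the previous data table already handled them.

```python
def traverse_table(arrays, total_frames, n_overlap=5):
    """Walk the frames of one data table, spawning a simulated vehicle whenever
    the traversed frame number equals an array's starting frame number."""
    spawned = []
    for frame in range(n_overlap + 1, total_frames + 1):  # begin at frame N+1
        for arr in arrays:
            if arr["start_frame"] == frame:
                spawned.append((frame, arr["plate"]))     # generate simulated vehicle
    return spawned

arrays = [{"start_frame": 7, "plate": "A"}, {"start_frame": 3, "plate": "B"}]
events = traverse_table(arrays, total_frames=20, n_overlap=5)
```

With N = 5 the traversal begins at frame 6, so vehicle B (starting frame 3, inside the overlapped region) never triggers a spawn here, while vehicle A spawns at frame 7.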
Specifically, after the simulated vehicle is generated, it receives the array data of the associated vehicle, so that the data required for the vehicle's entire subsequent movement is included at the moment the simulated vehicle is generated.
S53, determining the coordinate position at which the simulated vehicle is generated in the digital city scene according to the offset coordinates of the vehicle. Specifically, after vehicle A is generated at the 3rd frame, a timed-loop method declared in the array is also executed to traverse the data in the array, i.e. executed once per frame interval, reading the x-axis offset coordinate (x′c), the y-axis offset coordinate (y′c) and the running speed of vehicle A in the current frame image, positioning to the offset coordinate (x′c, y′c) in the scene of the digital twin city, and generating the simulated vehicle with the offset coordinate (x′c, y′c) as the center point; that is, whatever position vehicle A occupies in the current frame of the video image, the simulated vehicle is generated at the corresponding position in the digital twin city scene, and at the same time the running speed vc of the current frame is marked above the simulated vehicle. The current frame number is recorded on each execution, and the traversal ends when the frame number reaches 81. It should be noted that the given license plate number is generated together with the simulated vehicle and moves along the driving track of the vehicle, but the license plate number is not read again in the subsequent traversal of the array.
When the coordinate position is outside the preset driving area, correcting the coordinate position of the simulated vehicle to generate the simulated vehicle;
specifically, the data obtained by the AI recognition algorithm may contain unreasonable driving tracks, for example: the tracked driving track of a vehicle runs across a lawn or collides with an obstacle, in which case the data must be rationalized. When the vehicle is about to drive out of the preset driving area and collide with an obstacle, reading of the real-time data is immediately cut off and the simulated vehicle is moved using simulated data, i.e. driven toward a set lane.
Referring to fig. 3, based on the above embodiment, the step S53 specifically includes the following steps:
judging whether the offset coordinate of the simulated vehicle in the current vehicle is in a preset driving area or not;
if yes, generating the simulated vehicle according to the read data; specifically, positioning in the digital twin city to the scene corresponding to the monitoring video stream, finding in that scene the mark point Q′1 corresponding to the video picture, and calculating the offset coordinate with the mark point Q′1 as the origin to generate the simulated vehicle;
if not, the offset coordinates need to be corrected, the offset coordinates are recalculated according to the running speed set by the previous frame of image, and the simulated vehicle is continuously generated.
Preferably, the license plate number and the running speed are displayed directly above the generated simulated vehicle; please refer to fig. 7;
and S54, when traversing to an ending frame number, destroying the simulated vehicle that disappears normally within the preset driving area. Specifically, because the AI recognition algorithm is used to recognize the vehicles, recognition may be discontinuous (i.e. a vehicle is lost during recognition and then recognized again), causing the same vehicle to have several arrays; since one array generates one simulated vehicle, several simulated vehicles would otherwise be generated for it.
Referring to fig. 4, based on the above embodiment, the S54 specifically includes the following steps:
s541, in the termination frame image, judging whether the simulated vehicle disappears out of a preset driving area, namely whether the offset coordinate of the simulated vehicle in the termination frame image is out of the driving area;
if yes, destroying the simulated vehicle;
if not, continuously calculating the set running speed of the previous frame image to obtain an offset coordinate;
s542, judging whether a new simulated vehicle whose offset-coordinate distance from the simulated vehicle is less than or equal to 1 m is generated in the current frame;
if so, giving the data of the new simulated vehicle to the simulated vehicle, continuously reading the data in the array, continuously generating the simulated vehicle according to the offset coordinates in the array, and destroying the new simulated vehicle;
if not, continuing to generate the simulated vehicle at the offset coordinate;
the above steps S541 to S542 are repeated.
Specifically: when the driving track of a vehicle (vehicle B) is interrupted, vehicle B continues to move at its last running speed; if a simulated vehicle is newly generated in some later frame image and the position of the new simulated vehicle is very close to the position of vehicle B, then vehicle B has been recognized again. Therefore, the array data of the new simulated vehicle is given to vehicle B and the new simulated vehicle is deleted, and vehicle B continues to move along the driving track in the newly obtained data.
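The re-identification handling of steps S541–S542 (vehicle B keeps coasting at its last speed, then absorbs a new simulated vehicle spawned within 1 m) can be sketched as follows; names and the dictionary layout are my own.

```python
import math

def handle_track_end(vehicle, in_driving_area, new_vehicles, merge_radius=1.0):
    """At a vehicle's ending frame: destroy it if it leaves the preset driving
    area; otherwise keep it moving and absorb any new simulated vehicle
    spawned within merge_radius metres (a re-identified track)."""
    if not in_driving_area:
        return "destroyed", None
    for new in new_vehicles:
        dx = new["offset"][0] - vehicle["offset"][0]
        dy = new["offset"][1] - vehicle["offset"][1]
        if math.hypot(dx, dy) <= merge_radius:
            vehicle["array"] = new["array"]  # give the new array data to vehicle B
            return "merged", new             # the new simulated vehicle is destroyed
    return "coasting", None                  # keep moving at the last running speed

b = {"offset": (10.0, 5.0), "array": None}
status, absorbed = handle_track_end(
    b, True, [{"offset": (10.4, 5.3), "array": "new-data"}])
```

Here the new vehicle lies 0.5 m away, so it is merged into vehicle B rather than kept as a duplicate.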
S6, when traversing to the last frame of data table Dm, entering the next data table Dm+1, starting the traversal with the (N+1)-th frame of data table Dm+1 as the first frame, and continuing to map the vehicles in the digital twin city to generate a continuous real-time traffic simulation video;
based on the above embodiment, the S6 specifically includes the following steps:
when traversing to the starting frame number of the first array in data table Dm+1, reading the vehicle information in that array, and judging whether the license plate number of the vehicle is the same as the license plate number of a simulated vehicle from data table Dm;
if different, generating a new simulated vehicle and transmitting the array to the new simulated vehicle;
if the same, transmitting the array of the same vehicle to that simulated vehicle, which continues to be generated in the digital twin city.
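The cross-table handover of step S6 — matching arrays in D_{m+1} against already-simulated vehicles by license plate number — can be sketched as follows (illustrative names, plain dictionaries standing in for simulated vehicles):

```python
def stitch_tables(active_vehicles, next_table_arrays):
    """Entering data table D_{m+1}: arrays whose plate matches a vehicle already
    simulated from D_m are handed to that vehicle; the rest spawn new ones."""
    by_plate = {v["plate"]: v for v in active_vehicles}
    new_vehicles = []
    for arr in next_table_arrays:
        if arr["plate"] in by_plate:
            by_plate[arr["plate"]]["array"] = arr  # continue the existing vehicle
        else:
            new_vehicles.append({"plate": arr["plate"], "array": arr})
    return new_vehicles

active = [{"plate": "京A12345", "array": None}]
fresh = stitch_tables(active, [{"plate": "京A12345"}, {"plate": "沪B67890"}])
```

The matched plate continues its existing simulated vehicle; only the unmatched plate produces a new one, which is what keeps the video continuous across table boundaries.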
Through step S6, any two consecutive data tables can generate a continuous and smooth video in the digital twin city. While the client continuously receives the monitoring video stream, the stream is continuously divided, vehicle information is extracted and stored into data tables, and at the same time the data in the data tables is traversed and virtual vehicles are generated in the digital twin city, thereby realizing real-time simulation of the monitoring video stream.
Because the data volume is too large, after the vehicle data in the monitoring video stream has been extracted, the monitoring video stream needs to be deleted to reduce the storage pressure on the client.
Example 2
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a visual simulation system of real-time traffic data; the visual simulation system of real-time traffic data described below and the visual simulation method of real-time traffic data described above may be referred to correspondingly.
The system comprises the following modules:
a modeling unit: building a digital twin city based on the real city;
a receiving unit: receiving real-time road monitoring video streams corresponding to the digital twin cities in real time, and dividing preset running areas in the digital twin cities according to the monitoring areas of the monitoring video streams;
video stream dividing unit: dividing the monitoring video stream into a plurality of sections of videos with the same time length according to a time sequence, and overlapping N frames of images between any two adjacent sections of videos;
a storage unit: the method comprises the steps that vehicle information in each section of video is obtained, the vehicle information of each vehicle is stored in different arrays respectively, at least the initial frame number and the final frame number of the vehicle are recorded in each array, and all the arrays of the section of video are stored in one data table to obtain a plurality of data tables;
the method specifically comprises the following steps:
utilizing an AI recognition algorithm to divide a section of video into a plurality of frames of images, and recognizing vehicle information in each frame of image, wherein the vehicle information at least comprises the type, color and license plate number of a vehicle, pixel coordinates of the vehicle in the current frame of image and actual offset coordinates;
the calculation process of the pixel coordinates and the actual offset coordinates of the vehicle in the current frame image comprises the following steps:
taking the upper left corner of a video picture as a coordinate origin;
selecting at least two characteristic points in a video picture as a reference point and a mark point, and calculating the actual distance of a unit pixel by using the reference point and the mark point;
determining a first pixel coordinate of the vehicle relative to a coordinate origin in a current frame image;
and calculating the offset coordinate of the vehicle, namely the actual distance of the vehicle relative to the mark point by using the first pixel coordinate and the actual distance of the unit pixel.
A video generation unit: sequentially traversing the data table according to the time sequence, starting traversing by taking the (N + 1) th frame as a first frame when traversing to the data table, reading the vehicle information in the data table, and mapping each vehicle in the digital twin city;
a video connection unit: and when traversing to the last frame of the data table, entering the next data table, starting traversing by taking the (N + 1) th frame of the data table as the first frame, continuously mapping the vehicle in the digital twin city, and generating a continuous real-time traffic simulation video.
It should be noted that, regarding the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated herein.
Example 3
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a visual simulation device of real-time traffic data; the visual simulation device of real-time traffic data described below and the visual simulation method of real-time traffic data described above may be referred to correspondingly.
The electronic device may include: a processor, a memory. The electronic device may also include one or more of a multimedia component, an input/output (I/O) interface, and a communication component.
The processor is used for controlling the overall operation of the electronic device so as to complete all or part of the steps in the visual simulation method of real-time traffic data. The memory is used to store various types of data to support operation of the electronic device, which may include, for example, instructions for any application or method operating on the electronic device, as well as application-related data such as contact data, messages, pictures, audio, video, and so forth. The memory may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The multimedia components may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory or transmitted through the communication component. The audio component also includes at least one speaker for outputting audio signals. The I/O interface provides an interface between the processor and other interface modules, such as a keyboard, mouse or buttons; these buttons may be virtual buttons or physical buttons. The communication component is used for wired or wireless communication between the electronic device and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G or 4G, or a combination of one or more of them, so the corresponding communication component may include a Wi-Fi module, a Bluetooth module and an NFC module.
In an exemplary embodiment, the electronic device may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-mentioned visual simulation method of real-time traffic data.
In another exemplary embodiment, a computer-readable storage medium is also provided comprising program instructions which, when executed by a processor, implement the steps of the above-described method for visual simulation of real-time traffic data. For example, the computer readable storage medium may be the above-mentioned memory including program instructions executable by a processor of an electronic device to perform the above-mentioned visual simulation method of real-time traffic data.
Example 4
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a readable storage medium, and a readable storage medium described below and a visualization simulation method of real-time traffic data described above may be correspondingly referred to each other.
A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for visual simulation of real-time traffic data of the above-mentioned method embodiments.
The readable storage medium may be a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and may store various program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A visual simulation method of real-time traffic data is characterized by comprising the following steps:
building a digital twin city based on the real city;
receiving real-time road monitoring video streams corresponding to the digital twin cities in real time, and dividing preset running areas in the digital twin cities according to the monitoring areas of the monitoring video streams;
dividing the monitoring video stream into a plurality of sections of videos with the same time length according to a time sequence, and overlapping N frames of images between any two adjacent sections of videos;
the method comprises the steps that vehicle information in each segment of video is obtained, the vehicle information of each vehicle is stored into different arrays, at least the starting frame number when the vehicle appears and the ending frame number when the vehicle disappears are recorded in each array, and all the arrays of one segment of video are stored into one data table to obtain a plurality of data tables; the data tables are traversed in sequence according to the time sequence, and when traversing to data table Dm, the traversal is started with the (N+1)-th frame as the first frame, the vehicle information in data table Dm is read, and each vehicle is mapped into the digital twin city;
when traversing to the last frame of data table Dm, the next data table Dm+1 is entered, the traversal is started with the (N+1)-th frame of data table Dm+1 as the first frame, and the vehicles continue to be mapped in the digital twin city to generate a continuous real-time traffic simulation video;
wherein acquiring the vehicle information in each video segment, storing the vehicle information of each vehicle in a separate array, each array recording at least the start frame number at which the vehicle appears and the end frame number at which the vehicle disappears, and storing all the arrays of the video segment in one data table to obtain a plurality of data tables, comprises:
splitting a video segment into a plurality of frame images and recognizing, by an AI recognition algorithm, the vehicle information in each frame image, the vehicle information comprising at least the type, color and license plate number of the vehicle, and the first pixel coordinate and the actual offset coordinate of the vehicle in the current frame image;
wherein the calculation of the first pixel coordinate and the actual offset coordinate of the vehicle in the current frame image comprises:
taking the upper left corner of the video picture as the coordinate origin;
selecting at least two feature points in the video picture as a reference point and a mark point, the feature points being easy to locate and including, but not limited to, the crossing point of two lane boundaries, the midpoint or end point of a lane boundary, the point where a road sign meets the ground, or the point where a street lamp meets the ground;
calculating, from the reference point and the mark point, the actual distance covered by a unit pixel on the x-axis and on the y-axis;
determining the first pixel coordinate of the vehicle relative to the coordinate origin in the current frame image;
and calculating, from the first pixel coordinate and the actual distances of a unit pixel on the x-axis and the y-axis, the actual distances of the vehicle from the mark point on the x-axis and the y-axis to obtain the offset coordinate of the vehicle, the offset coordinate being the actual coordinate with the mark point as the origin.
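The calibration steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the real-world separation between the reference point and the mark point is known (e.g. a lane width), and all function and variable names are hypothetical.

```python
def unit_pixel_distance(ref_px, mark_px, real_dx_m, real_dy_m):
    """Metres per pixel on the x and y axes, from a reference point and a
    mark point whose real-world offsets (real_dx_m, real_dy_m) are known."""
    px_dx = abs(mark_px[0] - ref_px[0])
    px_dy = abs(mark_px[1] - ref_px[1])
    return real_dx_m / px_dx, real_dy_m / px_dy

def offset_coordinate(vehicle_px, mark_px, m_per_px_x, m_per_px_y):
    """Actual (x, y) offset of the vehicle in metres, with the mark point
    as origin; vehicle_px is the first pixel coordinate measured from the
    top-left corner of the frame."""
    return ((vehicle_px[0] - mark_px[0]) * m_per_px_x,
            (vehicle_px[1] - mark_px[1]) * m_per_px_y)

# Example: reference and mark points 100 px apart on each axis,
# corresponding to an assumed 3.5 m (one lane width) on each axis.
mx, my = unit_pixel_distance((200, 400), (300, 500), 3.5, 3.5)
veh = offset_coordinate((350, 550), (300, 500), mx, my)
```

Note this planar scale is only valid near the feature points; a real overhead camera would need a per-axis or homography-based calibration.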
2. The visual simulation method of real-time traffic data according to claim 1, wherein receiving, in real time, the real-time road monitoring video stream corresponding to the digital twin city, and dividing the preset driving area in the digital twin city according to the monitoring area of the monitoring video stream, comprises:
determining the roads to be monitored in the real city;
receiving, in real time, the monitoring video stream captured of those roads;
and locating, in the digital twin city, the road shown in the monitoring video stream, and setting the road captured in the monitoring video picture as the preset driving area in the digital twin city.
3. The visual simulation method of real-time traffic data according to claim 1, wherein dividing the monitoring video stream, in time order, into a plurality of video segments of equal duration, with N frames of images overlapping between any two adjacent segments, comprises:
setting the duration of each video segment as T and the number of overlapping frames as N;
determining the start timestamp and the end timestamp of each video segment, wherein N frames of images overlap between the end timestamp of the previous segment and the start timestamp of the next segment;
and dividing the monitoring video stream into a plurality of video segments of equal duration according to the start timestamp and the end timestamp of each segment.
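The segmentation rule of claim 3 can be sketched as a timestamp calculation. The frame rate (25 fps) and the function name are illustrative assumptions, not from the patent:

```python
def segment_timestamps(total_s, T, N, fps=25.0):
    """Return (start, end) timestamps in seconds for segments of duration T,
    where each segment starts N frames before the previous segment's end,
    so that adjacent segments share N frames of images."""
    overlap_s = N / fps
    step = T - overlap_s          # consecutive starts are T - N/fps apart
    segments, start = [], 0.0
    while start + T <= total_s:
        segments.append((start, start + T))
        start += step
    return segments

# A 30 s stream cut into 10 s segments overlapping by N = 25 frames (1 s).
segs = segment_timestamps(total_s=30.0, T=10.0, N=25, fps=25.0)
```

With these numbers the segments are (0, 10), (9, 19), (18, 28): each pair of neighbours shares exactly one second of video.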
4. The visual simulation method of real-time traffic data according to claim 1, wherein acquiring the vehicle information in each video segment, storing the vehicle information of each vehicle in a separate array, each array recording at least the start frame number at which the vehicle appears and the end frame number at which the vehicle disappears, and storing all the arrays of a video segment in one data table to obtain a plurality of data tables, comprises:
calculating the driving speed of the vehicle in the current frame image using the Nth frame image before and the Nth frame image after the current frame image, and tracking the driving track of the vehicle;
storing the vehicle information of a vehicle in an array, each array comprising at least 8 groups of data: the start frame number when the vehicle appears, the end frame number when the vehicle disappears, the vehicle type, the vehicle color, the license plate number, the X-axis offset coordinate, the Y-axis offset coordinate and the driving speed, wherein the X-axis offset coordinate, the Y-axis offset coordinate and the driving speed each hold data for every frame image from the start frame number to the end frame number;
and storing the resulting arrays in a data table, and recording the total frame number of the data table.
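The per-vehicle array and the speed estimate of claim 4 can be sketched as below. The field names, the dictionary shape of the data table, and the central-difference interpretation of "the Nth frame before and after" are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VehicleArray:
    """One vehicle's record: the 8 groups of data named in the claim."""
    start_frame: int   # frame number when the vehicle appears
    end_frame: int     # frame number when the vehicle disappears
    vehicle_type: str
    color: str
    plate: str
    x_offsets: List[float] = field(default_factory=list)  # per-frame X offset (m)
    y_offsets: List[float] = field(default_factory=list)  # per-frame Y offset (m)
    speeds: List[float] = field(default_factory=list)     # per-frame speed (m/s)

    def frame_count(self) -> int:
        return self.end_frame - self.start_frame + 1

def speed_at_frame(xs, ys, i, N, fps):
    """Speed (m/s) at frame i from the offsets N frames before and
    N frames after the current frame (central difference)."""
    dx, dy = xs[i + N] - xs[i - N], ys[i + N] - ys[i - N]
    return (dx * dx + dy * dy) ** 0.5 / (2 * N / fps)

# A data table is the set of arrays for one segment plus its total frames.
xs = [j * 1.0 for j in range(11)]                  # 1 m per frame along x
v = VehicleArray(0, 10, "car", "white", "A12345",
                 xs, [0.0] * 11, [25.0] * 11)
table = {"total_frames": 11, "arrays": [v]}
spd = speed_at_frame(v.x_offsets, v.y_offsets, 5, 2, fps=25.0)
```

The per-frame lists are kept exactly as long as `frame_count()`, matching the claim's requirement that offsets and speed cover every frame from start to end.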
5. The visual simulation method of real-time traffic data according to claim 1, wherein traversing the data tables in time order, and on reaching data table D_m, starting the traversal with the (N+1)th frame as the first frame, reading the vehicle information in data table D_m, and mapping the data of each vehicle into the digital twin city, comprises:
traversing the data table D_m of a video segment starting from the (N+1)th frame image, and reading the current traversal frame number;
when the traversal reaches the start frame number of an array, reading the vehicle information in the array, generating a corresponding simulated vehicle, and transmitting the array to the simulated vehicle;
determining, from the offset coordinate in the array, the coordinate position at which the simulated vehicle is generated in the digital city scene, and, when the coordinate position is outside the preset driving area, correcting the coordinate position of the simulated vehicle before generating it;
and, when the traversal reaches the end frame number, destroying a simulated vehicle that disappears normally within the preset driving area.
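The traversal of claim 5 amounts to an event loop over frame numbers: spawn a simulated vehicle at each array's start frame, destroy it at its end frame, and skip the first N frames that were already played from the previous table. A hedged sketch with an assumed event-tuple API:

```python
def traverse_table(arrays, total_frames, N):
    """arrays: list of dicts with 'start', 'end', 'plate' (illustrative
    keys). Yields ('spawn'|'destroy', plate, frame) events in frame order,
    beginning at frame N + 1 as the claim requires."""
    for frame in range(N + 1, total_frames + 1):
        for a in arrays:
            if a["start"] == frame:
                yield ("spawn", a["plate"], frame)
            if a["end"] == frame:
                yield ("destroy", a["plate"], frame)

arrays = [{"start": 5, "end": 8, "plate": "A1"},
          {"start": 2, "end": 6, "plate": "B2"}]   # B2 appears in the overlap
events = list(traverse_table(arrays, total_frames=10, N=3))
```

Because B2's start frame (2) precedes the first traversed frame (N + 1 = 4), only its destroy event is emitted here; in the full method that vehicle is the one carried over from the previous data table.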
6. The visual simulation method of real-time traffic data according to claim 5, wherein, when the coordinate position is outside the preset driving area, correcting the coordinate position of the simulated vehicle before generating it comprises:
if the coordinate position in the current frame image is outside the preset driving area, recalculating the offset coordinate from the driving speed recorded for the previous frame image, and continuing to generate the simulated vehicle at the coordinate position corresponding to the recalculated offset coordinate.
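The correction in claim 6 is essentially dead reckoning from the previous frame. The sketch below assumes a unit heading vector and a callable area test, neither of which is specified in the patent:

```python
def corrected_offset(prev_xy, prev_speed_mps, heading_xy, fps, raw_xy, in_area):
    """Keep raw_xy if it lies inside the preset driving area; otherwise
    advance prev_xy by the previous frame's speed along the unit heading
    for one frame interval (assumed correction rule)."""
    if in_area(raw_xy):
        return raw_xy
    dt = 1.0 / fps
    return (prev_xy[0] + heading_xy[0] * prev_speed_mps * dt,
            prev_xy[1] + heading_xy[1] * prev_speed_mps * dt)

road = lambda p: 0.0 <= p[1] <= 7.0      # toy two-lane corridor as the area
ok = corrected_offset((10.0, 3.5), 25.0, (1.0, 0.0), 25.0, (11.0, 3.6), road)
bad = corrected_offset((10.0, 3.5), 25.0, (1.0, 0.0), 25.0, (11.0, 9.0), road)
```

In the second call the recognized position (y = 9.0 m) leaves the corridor, so the vehicle is placed one frame's travel (25 m/s over 1/25 s = 1 m) ahead of its previous position instead.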
7. The visual simulation method of real-time traffic data according to claim 5, wherein destroying, when the traversal reaches the end frame number, a simulated vehicle that disappears normally within the preset driving area comprises:
when the traversal reaches the end frame number, judging whether the simulated vehicle disappears outside the preset driving area;
if so, destroying the simulated vehicle;
if not, recalculating an offset coordinate from the driving speed recorded for the previous frame image;
judging whether a new simulated vehicle whose distance from the offset coordinate of the simulated vehicle is less than or equal to 1 m is generated in the current frame;
if so, assigning the data of the new simulated vehicle to the existing simulated vehicle, continuing to read the data in the array, continuing to generate the simulated vehicle at the offset coordinates in the array, and destroying the new simulated vehicle;
and if not, continuing to generate the simulated vehicle at the recalculated offset coordinate.
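The three-way decision in claim 7 (destroy, hand off to a nearby new vehicle, or keep extrapolating) can be condensed into one dispatch function. The tuple return values and input shapes are illustrative assumptions:

```python
def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def handle_termination(vanish_xy, inside_area, new_spawns, max_gap_m=1.0):
    """Decide the fate of a vehicle at its end frame.
    Returns ('destroy', None) if it left the driving area,
    ('handoff', plate) if a new simulated vehicle spawned within 1 m
    (the new vehicle's data is adopted and the duplicate destroyed),
    or ('extrapolate', None) to keep dead-reckoning from the last speed."""
    if not inside_area:
        return ("destroy", None)
    for plate, xy in new_spawns:
        if distance(vanish_xy, xy) <= max_gap_m:
            return ("handoff", plate)
    return ("extrapolate", None)

r1 = handle_termination((5.0, 2.0), inside_area=False, new_spawns=[])
r2 = handle_termination((5.0, 2.0), True, [("B2", (5.5, 2.0))])
r3 = handle_termination((5.0, 2.0), True, [("B2", (9.0, 2.0))])
```

The 1 m threshold is the figure given in the claim; it guards against the AI tracker losing a vehicle mid-segment and re-detecting it as a "new" one.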
8. The visual simulation method of real-time traffic data according to claim 1, wherein, after traversing the last frame of data table D_m, entering the next data table D_{m+1}, starting the traversal with the (N+1)th frame of data table D_{m+1} as the first frame, and continuing to map the vehicles into the digital twin city, comprises:
when the traversal reaches the start frame number of the first array in data table D_{m+1}, reading the vehicle information in that array, and judging whether the license plate number of the vehicle is the same as that of any simulated vehicle from data table D_m;
if different, generating a new simulated vehicle, and transmitting the array to the new simulated vehicle;
and if the same, assigning the array to the existing simulated vehicle with that license plate number, so that the simulated vehicle continues to be generated in the digital twin city.
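The cross-segment stitching of claim 8 reduces to matching the next table's arrays against the plates still alive from the previous table. A minimal sketch with assumed data shapes:

```python
def stitch(alive_plates, next_table_arrays):
    """Partition the arrays of data table D_{m+1} by license plate:
    'reused' plates are handed to the simulated vehicle carried over
    from D_m; 'created' plates get a brand-new simulated vehicle."""
    reused, created = [], []
    for a in next_table_arrays:
        (reused if a["plate"] in alive_plates else created).append(a["plate"])
    return reused, created

reused, created = stitch({"A1", "B2"},
                         [{"plate": "A1"}, {"plate": "C3"}])
# "A1" keeps its existing simulated vehicle; "C3" is newly generated.
```

This plate-based identity check is what lets the N-frame overlap between segments play back without vehicles popping out and respawning at the segment boundary.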
9. A visual simulation system for real-time traffic data, comprising:
a modeling unit: building a digital twin city based on the real city;
a receiving unit: receiving, in real time, the real-time road monitoring video stream corresponding to the digital twin city, and dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video stream;
a video stream dividing unit: dividing the monitoring video stream, in time order, into a plurality of video segments of equal duration, with N frames of images overlapping between any two adjacent segments;
a storage unit: acquiring the vehicle information in each video segment, storing the vehicle information of each vehicle in a separate array, each array recording at least the start frame number at which the vehicle appears and the end frame number at which the vehicle disappears, and storing all the arrays of the video segment in one data table to obtain a plurality of data tables;
the storage unit specifically:
splits a video segment into a plurality of frame images and recognizes, by an AI recognition algorithm, the vehicle information in each frame image, the vehicle information comprising at least the type, color and license plate number of the vehicle, and the first pixel coordinate and the actual offset coordinate of the vehicle in the current frame image;
wherein the calculation of the first pixel coordinate and the actual offset coordinate of the vehicle in the current frame image comprises:
taking the upper left corner of the video picture as the coordinate origin;
selecting at least two feature points in the video picture as a reference point and a mark point, the feature points being easy to locate and including, but not limited to, the crossing point of two lane boundaries, the midpoint or end point of a lane boundary, the point where a road sign meets the ground, or the point where a street lamp meets the ground;
calculating, from the reference point and the mark point, the actual distance covered by a unit pixel on the x-axis and on the y-axis;
determining the first pixel coordinate of the vehicle relative to the coordinate origin in the current frame image;
and calculating, from the first pixel coordinate and the actual distances of a unit pixel on the x-axis and the y-axis, the actual distances of the vehicle from the mark point on the x-axis and the y-axis to obtain the offset coordinate of the vehicle, the offset coordinate being the actual coordinate with the mark point as the origin;
a video generation unit: traversing the data tables in time order, and on reaching data table D_m, starting the traversal with the (N+1)th frame as the first frame, reading the vehicle information in the data table, and mapping each vehicle into the digital twin city;
a video connection unit: after traversing the last frame of data table D_m, entering the next data table D_{m+1}, starting the traversal with the (N+1)th frame of data table D_{m+1} as the first frame, continuing to map the vehicles into the digital twin city, and generating a continuous real-time traffic simulation video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110427414.2A CN112991742B (en) | 2021-04-21 | 2021-04-21 | Visual simulation method and system for real-time traffic data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112991742A CN112991742A (en) | 2021-06-18 |
CN112991742B true CN112991742B (en) | 2021-08-20 |
Family
ID=76341408
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110427414.2A Active CN112991742B (en) | 2021-04-21 | 2021-04-21 | Visual simulation method and system for real-time traffic data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112991742B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114529875A (en) * | 2022-04-24 | 2022-05-24 | 浙江这里飞科技有限公司 | Method and device for detecting illegal parking vehicle, electronic equipment and storage medium |
CN114966695B (en) * | 2022-05-11 | 2023-11-14 | 南京慧尔视软件科技有限公司 | Digital twin image processing method, device, equipment and medium for radar |
CN114782588B (en) * | 2022-06-23 | 2022-09-27 | 四川见山科技有限责任公司 | Real-time drawing method and system for road names in digital twin city |
CN114780666B (en) * | 2022-06-23 | 2022-09-27 | 四川见山科技有限责任公司 | Road label optimization method and system in digital twin city |
CN114943940A (en) * | 2022-07-26 | 2022-08-26 | 山东金宇信息科技集团有限公司 | Method, equipment and storage medium for visually monitoring vehicles in tunnel |
CN116543586B (en) * | 2023-07-03 | 2023-09-08 | 深圳市视想科技有限公司 | Intelligent public transportation information display method and display equipment based on digital twinning |
CN117579761B (en) * | 2024-01-16 | 2024-03-26 | 苏州映赛智能科技有限公司 | Error display control method of road digital twin model |
Family Cites Families (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006295702A (en) * | 2005-04-13 | 2006-10-26 | Canon Inc | Image processing apparatus and method, image processing program, and storage medium |
CN100555355C (en) * | 2006-08-29 | 2009-10-28 | 亿阳信通股份有限公司 | The method and system that the passage rate of road traffic calculates and mates |
CN100538763C (en) * | 2007-02-12 | 2009-09-09 | 吉林大学 | Mixed traffic flow parameters detection method based on video |
US8063918B2 (en) * | 2008-05-28 | 2011-11-22 | Adobe Systems Incorporated | Method and apparatus for rendering images with and without radially symmetric distortions |
US7973276B2 (en) * | 2009-07-29 | 2011-07-05 | Ut-Battelle, Llc | Calibration method for video and radiation imagers |
CN101866429B (en) * | 2010-06-01 | 2012-09-05 | 中国科学院计算技术研究所 | Training method of multi-moving object action identification and multi-moving object action identification method |
CN101924953B (en) * | 2010-09-03 | 2012-11-14 | 南京农业大学 | Simple matching method based on datum point |
JP5070435B1 (en) * | 2011-07-01 | 2012-11-14 | 株式会社ベイビッグ | Three-dimensional relative coordinate measuring apparatus and method |
CN102307320B (en) * | 2011-08-11 | 2013-07-10 | 江苏亿通高科技股份有限公司 | Piracy tracing watermarking method applicable to streaming media environment |
CN102636378A (en) * | 2012-04-10 | 2012-08-15 | 无锡国盛精密模具有限公司 | Machine vision orientation-based biochip microarrayer and positioning method thereof |
CN103577412B (en) * | 2012-07-20 | 2017-02-08 | 永泰软件有限公司 | High-definition video based traffic incident frame tagging method |
CN103092924A (en) * | 2012-12-28 | 2013-05-08 | 深圳先进技术研究院 | Real-time data driven traffic data three-dimensional visualized analysis system and method |
CN105354548B (en) * | 2015-10-30 | 2018-10-26 | 武汉大学 | A kind of monitor video pedestrian recognition methods again based on ImageNet retrievals |
CN106204560B (en) * | 2016-07-02 | 2019-04-16 | 上海大学 | Colony picker automatic calibration method |
CN106604059B (en) * | 2016-12-28 | 2020-07-14 | 深圳Tcl新技术有限公司 | Data delivery method and system |
US10691847B2 (en) * | 2017-01-13 | 2020-06-23 | Sap Se | Real-time damage determination of an asset |
CN107526504B (en) * | 2017-08-10 | 2020-03-17 | 广州酷狗计算机科技有限公司 | Image display method and device, terminal and storage medium |
CN108694237A (en) * | 2018-05-11 | 2018-10-23 | 东峡大通(北京)管理咨询有限公司 | Handle method, equipment, visualization system and the user terminal of vehicle position data |
CN108805073A (en) * | 2018-06-06 | 2018-11-13 | 合肥嘉仕诚能源科技有限公司 | A kind of safety monitoring dynamic object optimization track lock method and system |
US10843689B2 (en) * | 2018-06-13 | 2020-11-24 | Toyota Jidosha Kabushiki Kaisha | Collision avoidance for a connected vehicle based on a digital behavioral twin |
CN109063587B (en) * | 2018-07-11 | 2021-02-26 | 北京大米科技有限公司 | Data processing method, storage medium and electronic device |
CN109359507B (en) * | 2018-08-24 | 2021-10-08 | 南京理工大学 | Method for quickly constructing workshop personnel digital twin model |
CN109147341B (en) * | 2018-09-14 | 2019-11-22 | 杭州数梦工场科技有限公司 | Violation vehicle detection method and device |
CN111275960A (en) * | 2018-12-05 | 2020-06-12 | 杭州海康威视系统技术有限公司 | Traffic road condition analysis method, system and camera |
CN109615862A (en) * | 2018-12-29 | 2019-04-12 | 南京市城市与交通规划设计研究院股份有限公司 | Road vehicle movement of traffic state parameter dynamic acquisition method and device |
CN109887027A (en) * | 2019-01-03 | 2019-06-14 | 杭州电子科技大学 | A kind of method for positioning mobile robot based on image |
JP7103241B2 (en) * | 2019-01-15 | 2022-07-20 | 株式会社デンソー | Theft vehicle tracking system |
US11132649B2 (en) * | 2019-01-18 | 2021-09-28 | Johnson Controls Tyco IP Holdings LLP | Smart parking lot system |
US10991217B2 (en) * | 2019-02-02 | 2021-04-27 | Delta Thermal, Inc. | System and methods for computerized safety and security |
CN110033479B (en) * | 2019-04-15 | 2023-10-27 | 四川九洲视讯科技有限责任公司 | Traffic flow parameter real-time detection method based on traffic monitoring video |
CN110111565A (en) * | 2019-04-18 | 2019-08-09 | 中国电子科技网络信息安全有限公司 | A kind of people's vehicle flowrate System and method for flowed down based on real-time video |
CN110060524A (en) * | 2019-04-30 | 2019-07-26 | 广东小天才科技有限公司 | The method and reading machine people that a kind of robot assisted is read |
US10922826B1 (en) * | 2019-08-07 | 2021-02-16 | Ford Global Technologies, Llc | Digital twin monitoring systems and methods |
CN110505464A (en) * | 2019-08-21 | 2019-11-26 | 佳都新太科技股份有限公司 | A kind of number twinned system, method and computer equipment |
CN110398233B (en) * | 2019-09-04 | 2021-06-08 | 浙江中光新能源科技有限公司 | Heliostat field coordinate mapping method based on unmanned aerial vehicle |
CN110826415A (en) * | 2019-10-11 | 2020-02-21 | 上海眼控科技股份有限公司 | Method and device for re-identifying vehicles in scene image |
CN110992425A (en) * | 2019-12-11 | 2020-04-10 | 北京大豪科技股份有限公司 | Image calibration method and device, electronic equipment and storage medium |
CN111400405B (en) * | 2020-03-30 | 2021-04-02 | 兰州交通大学 | Monitoring video data parallel processing system and method based on distribution |
CN111505916B (en) * | 2020-05-25 | 2023-06-09 | 江苏迪盛智能科技有限公司 | Laser direct imaging device |
CN111815710B (en) * | 2020-05-28 | 2024-01-23 | 北京易航远智科技有限公司 | Automatic calibration method for fish-eye camera |
CN111565303B (en) * | 2020-05-29 | 2021-12-14 | 广东省电子口岸管理有限公司 | Video monitoring method, system and readable storage medium based on fog calculation and deep learning |
CN111724485B (en) * | 2020-06-11 | 2024-06-07 | 浙江商汤科技开发有限公司 | Method, device, electronic equipment and storage medium for realizing virtual-real fusion |
CN111565286B (en) * | 2020-07-14 | 2020-10-23 | 之江实验室 | Video static background synthesis method and device, electronic equipment and storage medium |
CN111737887B (en) * | 2020-08-21 | 2020-11-24 | 北京理工大学 | Virtual stage based on parallel simulation |
CN112163348A (en) * | 2020-10-24 | 2021-01-01 | 腾讯科技(深圳)有限公司 | Method and device for detecting road abnormal road surface, simulation method, vehicle and medium |
KR102217870B1 (en) * | 2020-10-28 | 2021-02-19 | 주식회사 예향엔지니어링 | Traffic managing system using digital twin technology |
CN112378953A (en) * | 2020-11-25 | 2021-02-19 | 清华大学 | Observation device for liquid drops colliding with metal bottom plate and application of observation device |
CN112529856A (en) * | 2020-11-30 | 2021-03-19 | 华为技术有限公司 | Method for determining the position of an operating object, robot and automation system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112991742B (en) | Visual simulation method and system for real-time traffic data | |
CN112990114B (en) | Traffic data visualization simulation method and system based on AI identification | |
US10621495B1 (en) | Method for training and refining an artificial intelligence | |
US11941887B2 (en) | Scenario recreation through object detection and 3D visualization in a multi-sensor environment | |
US11893682B1 (en) | Method for rendering 2D and 3D data within a 3D virtual environment | |
CN109064755B (en) | Path identification method based on four-dimensional real-scene traffic simulation road condition perception management system | |
CN113033030B (en) | Congestion simulation method and system based on real road scene | |
CN109740420A (en) | Vehicle illegal recognition methods and Related product | |
KR20210052031A (en) | Deep Learning based Traffic Flow Analysis Method and System | |
US20220237919A1 (en) | Method, Apparatus, and Computing Device for Lane Recognition | |
CN108898839B (en) | Real-time dynamic traffic information data system and updating method thereof | |
CA3179005A1 (en) | Artificial intelligence and computer vision powered driving-performance assessment | |
JP7278414B2 (en) | Digital restoration method, apparatus and system for traffic roads | |
JP2019154027A (en) | Method and device for setting parameter for video monitoring system, and video monitoring system | |
Zipfl et al. | From traffic sensor data to semantic traffic descriptions: The test area autonomous driving baden-württemberg dataset (taf-bw dataset) | |
CN115762120A (en) | Holographic sensing early warning system based on whole highway section | |
US11727580B2 (en) | Method and system for gathering information of an object moving in an area of interest | |
CN108682154A (en) | Congestion in road detecting system based on the analysis of wagon flow state change deep learning | |
JP6405606B2 (en) | Image processing apparatus, image processing method, and image processing program | |
CN113066182A (en) | Information display method and device, electronic equipment and storage medium | |
CN116524718A (en) | Remote visual processing method and system for intersection data | |
CN115394089A (en) | Vehicle information fusion display method, sensorless passing system and storage medium | |
JP2023536692A (en) | AI-based monitoring of racetracks | |
CN114691760A (en) | OD track display method and device, computing equipment and storage medium | |
TWI455072B (en) | Vehicle-locating system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right |
Denomination of invention: A Visual Simulation Method and System for Real Time Traffic Data Effective date of registration: 20231019 Granted publication date: 20210820 Pledgee: Chengdu Rural Commercial Bank Co.,Ltd. Tianfu New Area Branch Pledgor: SICHUAN JIANSHAN TECHNOLOGY CO.,LTD. Registration number: Y2023510000232 |