CN117132178A - Scene application model construction method based on smart city

Scene application model construction method based on smart city

Info

Publication number
CN117132178A
Authority
CN
China
Prior art keywords
scene
dimensional
scene application
data
urban
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311403879.XA
Other languages
Chinese (zh)
Other versions
CN117132178B (en)
Inventor
卞恒沁
邹德文
胡小武
陈杰杰
刘梓翔
江永胜
邹超越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Guozhun Data Co ltd
Original Assignee
Nanjing Guozhun Data Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Guozhun Data Co ltd filed Critical Nanjing Guozhun Data Co ltd
Priority to CN202311403879.XA priority Critical patent/CN117132178B/en
Publication of CN117132178A publication Critical patent/CN117132178A/en
Application granted granted Critical
Publication of CN117132178B publication Critical patent/CN117132178B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/067 Enterprise or organisation modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00 Adapting or protecting infrastructure or their operation
    • Y02A30/60 Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Theoretical Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Educational Administration (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Game Theory and Decision Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a scene application model construction method based on a smart city, and relates to the technical field of scene application model construction. According to the method, a city is divided into a plurality of urban areas and an equipment group is set for each urban area; the acquired data of each urban area are obtained through the equipment groups, and factor labels are set according to the states of the objects in the acquired data. The acquired data of sensors of the same kind are spliced according to time periods, and the spliced acquired data of sensors of different kinds are mapped to each other to establish a multidimensional dataset. High-dimensional feature points and low-dimensional feature points are established according to the factor labels of the objects in the multidimensional dataset, and a scene application model is established for each urban area. The scene application models of all urban areas are then spliced to obtain a city scene application model, virtual-layer scene prediction is carried out for the city through the city scene application model, and whether the city scene application model is accurate is verified through the scene development results of the real layer.

Description

Scene application model construction method based on smart city
Technical Field
The application relates to the field of scene application model construction, in particular to a scene application model construction method based on a smart city.
Background
A scene application model is a model constructed for a specific scene or a specific application requirement, and is mainly used for solving problems in a specific scene or realizing a specific application function; owing to its functional compatibility it is currently applied in various industries, such as city management and substation management.
Existing scene application model construction methods generally require a large amount of high-quality data for training and testing, yet real-world data are often incomplete, inaccurate or biased, so the performance or effect of the constructed scene application model falls short of expectations. In addition, the prior art lacks an accurate method for evaluating whether a scene application model has been constructed correctly. The present application therefore provides a scene application model construction method based on a smart city.
Disclosure of Invention
In order to solve the technical problems, the application aims to provide a scene application model construction method based on a smart city.
In order to achieve the above object, the present application provides the following technical solutions:
a scene application model construction method based on a smart city comprises the following steps:
Step one: dividing a city into a plurality of urban areas of equal size, setting an equipment group for each urban area, obtaining the acquired data of each urban area through the equipment groups, and setting factor labels according to the states of the objects in the acquired data;
Step two: splicing the acquired data of sensors of the same kind according to time periods, and mapping the spliced acquired data of sensors of different kinds to each other to establish a multidimensional dataset;
Step three: establishing high-dimensional feature points and low-dimensional feature points for the factor labels of the objects in the multidimensional dataset of each urban area, and further establishing a scene application model for each urban area;
Step four: performing time synchronization between the urban areas, splicing the scene application models of the urban areas to obtain a city scene application model, carrying out virtual-layer scene prediction for the city through the city scene application model, and verifying whether the city scene application model is accurate through the scene development results of the real layer.
Furthermore, the equipment group consists of a plurality of sensors, wireless signal devices and a management and control system; each management and control system is assigned the same number as the urban area where it is located, is used for managing the acquired data of each sensor, and a time synchronization mechanism is provided among the management and control systems.
Further, the process of acquiring the acquired data of the urban area through the equipment group comprises the following steps:
setting a data acquisition period, and when the data acquisition period starts, performing time synchronization calibration among all the control systems, and after confirming that the time among all the control systems is consistent, sending data acquisition instructions to all the sensors of the equipment group where the control systems are located;
all sensors in the equipment group acquire data with the same time length of the data acquisition period in the urban area according to the acquisition instruction and send the data to the management and control system;
after the data acquisition period ends, the management and control system arranges the acquired data of each sensor according to the sensor numbers, and at the same time sets static factor labels and dynamic factor labels for the acquired data of each sensor, wherein both the static factor label and the dynamic factor label carry three sub-factor labels: a pedestrian factor label, a system factor label and an environment factor label.
Further, the splicing process of the acquired data of the same kind of sensor includes:
setting a time tangent point, and dividing acquired data corresponding to each sensor in the same data acquisition period into a plurality of data fragments by taking a time node as a unit according to the time tangent point;
meanwhile, a plurality of connection characteristic points are set for the factor labels of the objects at the edge positions of all the data fragments, so that the data fragments of the same kind of sensors in the same data acquisition period and at the same time node are classified;
matching and connecting the connection characteristic points of the data segments under the same classification to obtain complete data segments of relevant types of urban areas in corresponding time nodes;
and sequentially connecting the complete data fragments corresponding to the time nodes according to the time node sequence, so as to obtain the complete acquired data corresponding to the data acquisition period.
Further, the process of establishing the multi-dimensional dataset includes:
setting a three-dimensional coordinate system, mapping the complete acquired data corresponding to each type of sensor into the three-dimensional coordinate system, and at the same time setting a unified time axis;
setting a fixed mapping label for objects with a static factor label in the complete acquired data corresponding to each kind of sensor, and setting a mobile mapping label for objects with a dynamic factor label;
carrying out local mapping and overlapping of the objects with fixed mapping labels in the complete acquired data corresponding to each kind of sensor along the time axis of the three-dimensional coordinate system;
and, taking the time axis as a reference, obtaining the motion vector of each object with a mobile mapping label, and mapping the objects with the same motion vector in the complete acquired data corresponding to each kind of sensor to each other, thereby obtaining the multidimensional dataset of the corresponding urban area in the corresponding data acquisition period.
Further, the process for establishing the scene application model comprises the following steps:
traversing the objects with factor labels in the complete acquired data of all layers of the multidimensional dataset, marking the objects with dynamic factor labels with high-dimensional feature points, and marking the objects with static factor labels with low-dimensional feature points;
for an object with high-dimensional characteristic points, establishing a corresponding local scene model according to pedestrian factor labels, system factor labels or environment factor labels;
setting a plurality of moving characteristic points on a pedestrian or an object by taking a time axis on a multidimensional data set as a reference point, and further establishing a plurality of displacement track lines or deformation track lines corresponding to the pedestrian or the object in a data acquisition period;
establishing a three-dimensional image model of a pedestrian or an object with high-dimensional feature points and pedestrian factor labels, marking the moving feature points on corresponding positions of the three-dimensional image model, and establishing a moving three-dimensional model of the pedestrian or the object corresponding to the high-dimensional feature points and the pedestrian factor labels in a data acquisition period according to moving track lines or deformation track lines corresponding to the moving feature points;
meanwhile, according to the complete acquired data of each layer in the multi-dimensional data set, a mobile three-dimensional model of the corresponding type of the complete acquired data is established;
according to the hierarchical sequence of the complete acquired data of each layer in the multidimensional dataset, performing superposition mapping on the three-dimensional model corresponding to the complete acquired data of each layer, and further obtaining a local scene model corresponding to a pedestrian or an object, and obtaining a local scene model of each object with high-dimensional characteristic points;
for an object with low-dimensional characteristic points, performing superposition mapping on a three-dimensional model corresponding to the complete acquired data of each layer to obtain a local scene model corresponding to the object with the low-dimensional characteristic points;
sequentially connecting the local scene models with the low-dimensional characteristic points according to the multi-dimensional data set, so as to obtain a static scene model corresponding to the urban area;
and mapping the moving track with the high-dimensional characteristic points into a static scene model to further obtain a scene application model corresponding to the urban area.
Further, the time synchronization process of the urban area includes:
setting a cloud computing platform, and after the scene application model of the urban area is built, sending clock signals to the cloud computing platform by the management and control systems of all the urban areas, wherein the clock signals comprise the serial numbers of the management and control systems and the system time;
after the cloud computing platform confirms, according to the number and serial numbers of the clock signals, that the clock signals of all urban areas have been received, it takes its own system time as the standard time, compares it with the system time of each clock signal, and at the same time compares the system times of the clock signals with each other using majority decision.
Further, the process of establishing the city scene application model and verifying whether the city scene application model is accurate comprises the following steps:
after time synchronization, the management and control system of each urban area sends its scene application model to the cloud computing platform, and the scene application models are connected in sequence according to the urban area numbers corresponding to the scene application models, thereby obtaining the city scene application model;
taking a data acquisition period as the time unit, generating the predicted movement process of each object with high-dimensional feature points in the next data acquisition period of the virtual layer according to the movement processes of the objects with high-dimensional feature points in the city scene application model, thereby obtaining the virtual-layer predicted development result of each urban area; at the same time, the management and control system sends a data acquisition instruction to each sensor, thereby generating the real-layer scene development result;
comparing, urban area by urban area, the movement or deformation processes of the objects with high-dimensional feature points in the virtual-layer predicted development result and in the real-layer scene development result, and counting the number of objects whose movement or deformation processes are consistent;
setting an accuracy threshold according to the total number of objects with high-dimensional feature points in each urban area; if the number of objects with consistent movement or deformation processes is greater than or equal to the accuracy threshold, judging that the scene application model of the corresponding urban area is accurate;
if the number of objects with consistent movement or deformation processes is smaller than the accuracy threshold, judging that the scene application model of the corresponding urban area is inaccurate, and re-establishing the scene application model according to the acquired data of the corresponding urban area in the next data acquisition period;
and when the scene application models of all the urban areas are verified to be accurate, carrying out the scene prediction for the next data acquisition period through the corresponding city scene application model.
Compared with the prior art, the application has the beneficial effects that:
1. According to the method, a plurality of urban areas are set and an equipment group is arranged in each urban area; the acquired data of each urban area are obtained through the equipment groups, factor labels are set according to the states of the objects in the acquired data, the acquired data of sensors of the same kind are spliced according to time periods, the spliced acquired data of sensors of different kinds are mapped to each other to establish a multidimensional dataset, high-dimensional feature points and low-dimensional feature points are established according to the factor labels of the objects in the multidimensional dataset, and a scene application model is established for each urban area. Multi-source data of the same object are acquired from multiple acquisition angles by different sensors and superimposed through multi-dimensional mapping, which effectively reduces the probability of errors in the data source and improves the credibility of the scene application model;
2. According to the method, the scene application models of all urban areas are spliced to obtain a city scene application model, virtual-layer scene prediction is carried out for the city through the city scene application model, and whether the city scene application model is accurate is verified through the scene development results of the real layer. Verifying the accuracy of the scene application model by combining the prediction results of the virtual layer with the development results of the real layer effectively reduces the error rate caused by an ill-defined application scene or an ill-defined modeling method.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application, and those skilled in the art may obtain other drawings from these drawings.
FIG. 1 is a flow chart of the method of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the technical solutions of the present application are described in detail below. It is apparent that the described embodiments are only some, but not all, of the embodiments of the application; all other embodiments obtained by those skilled in the art based on the embodiments herein fall within the scope of the application as defined by the claims.
As shown in fig. 1, a method for constructing a scene application model based on a smart city includes the following steps:
Step one: dividing a city into a plurality of urban areas of equal size, setting an equipment group for each urban area, obtaining the acquired data of each urban area through the equipment groups, and setting factor labels according to the states of the objects in the acquired data;
specifically, a city map is obtained and an electronic city map of equal proportion is generated from it; the city is then divided into a plurality of equal-sized urban areas according to the electronic city map, and a number is set for each urban area, for example S1, S2, ..., Sn, where n is a natural number greater than 0.
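The partitioning step can be illustrated with a minimal sketch in Python. It assumes the electronic city map is treated as a rectangular extent split into a regular grid; the names (CityArea, divide_city) and the grid assumption are illustrative and not part of the original method.

```python
# Hypothetical sketch: divide a rectangular city extent into equal-sized,
# numbered areas S1..Sn. Names and the grid assumption are illustrative only.
from dataclasses import dataclass

@dataclass
class CityArea:
    number: str          # e.g. "S1"
    min_x: float
    min_y: float
    max_x: float
    max_y: float

def divide_city(min_x, min_y, max_x, max_y, rows, cols):
    """Split the city bounding box into rows*cols equal-sized areas."""
    width = (max_x - min_x) / cols
    height = (max_y - min_y) / rows
    areas = []
    index = 1
    for r in range(rows):
        for c in range(cols):
            areas.append(CityArea(
                number=f"S{index}",
                min_x=min_x + c * width,
                min_y=min_y + r * height,
                max_x=min_x + (c + 1) * width,
                max_y=min_y + (r + 1) * height,
            ))
            index += 1
    return areas

# Example: a 3 x 4 grid yields areas S1..S12.
areas = divide_city(0.0, 0.0, 12000.0, 9000.0, rows=3, cols=4)
```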
An equipment group is set for each urban area; each equipment group consists of a plurality of sensors, wireless signal devices and a management and control system, and the sensors may be cameras, infrared sensors and the like. Each management and control system is assigned the same number as the urban area where it is located and is used for managing the acquired data of each sensor, and a time synchronization mechanism is provided among the management and control systems;
setting a data acquisition period, and when the data acquisition period starts, performing time synchronization calibration among all the control systems, and after confirming that the time among all the control systems is consistent, sending data acquisition instructions to all the sensors of the equipment group where the control systems are located;
all sensors in the equipment group acquire data with the same time length of the data acquisition period in the urban area according to the acquisition instruction and send the data to the management and control system;
the management and control system sets a number si,j for each sensor, for example s1,1, s1,2, ..., where i and j are natural numbers greater than 0 and i ≤ n; the acquisition areas of sensors of the same type are sequentially adjacent or partially overlapping, and sensors of the same type do not overlap at the same position;
after the data acquisition period ends, the management and control system arranges the acquired data of each sensor according to the sensor numbers, and at the same time sets static factor labels and dynamic factor labels for the acquired data of each sensor, wherein both the static factor label and the dynamic factor label carry three sub-factor labels: a pedestrian factor label, a system factor label and an environment factor label;
the dynamic factor label represents an object which is subjected to displacement or deformation in the acquired data;
the static factor label represents an object which is not displaced or deformed in the acquired data;
the pedestrian factor label represents the parts of the acquired data that are pedestrians or are directly associated with pedestrians, such as people and automobiles;
the system factor label represents the sensor parts of the equipment group in the acquired data;
the environment factor label represents urban objects in the acquired data such as green belts, buildings and street lamps;
it should be noted that, within one data acquisition period, the same object may carry different factor labels in different time periods, and at any time node an object carries either a static factor label or a dynamic factor label, never both simultaneously.
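To make the labelling scheme concrete, the following is a minimal illustrative sketch of a labelled data record. The class and field names (FactorLabel, SubFactorLabel, LabeledObject) are assumptions introduced here rather than terms from the original method; the mutual exclusion of static and dynamic labels at a single time node is reflected by storing exactly one factor label per record.

```python
# Hypothetical record structure for labelled acquired data; field names are
# illustrative. At any time node an object carries either a static or a
# dynamic factor label (never both), plus exactly one sub-factor label.
from dataclasses import dataclass
from enum import Enum

class FactorLabel(Enum):
    STATIC = "static"    # no displacement or deformation at this time node
    DYNAMIC = "dynamic"  # displaced or deformed at this time node

class SubFactorLabel(Enum):
    PEDESTRIAN = "pedestrian"    # pedestrians or directly associated objects (e.g. cars)
    SYSTEM = "system"            # sensors of the equipment group itself
    ENVIRONMENT = "environment"  # green belts, buildings, street lamps, ...

@dataclass
class LabeledObject:
    object_id: str
    sensor_id: str              # e.g. "s_1_2"
    time_node: int              # index of the time node within the acquisition period
    factor: FactorLabel
    sub_factor: SubFactorLabel

# The same physical object may carry different factor labels at different
# time nodes of one acquisition period:
records = [
    LabeledObject("car-17", "s_1_1", 0, FactorLabel.STATIC, SubFactorLabel.PEDESTRIAN),
    LabeledObject("car-17", "s_1_1", 1, FactorLabel.DYNAMIC, SubFactorLabel.PEDESTRIAN),
]
```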
Step two: splicing the acquired data of sensors of the same kind according to time periods, and mapping the spliced acquired data of sensors of different kinds to each other to establish a multidimensional dataset;
specifically, a time tangent point is set, and acquired data corresponding to each sensor in the same data acquisition period is divided into a plurality of data segments by taking a time node as a unit according to the time tangent point;
meanwhile, a plurality of connection characteristic points are set for the factor labels of the objects at the edge positions of all the data fragments, so that the data fragments of the same kind of sensors in the same data acquisition period and at the same time node are classified;
carrying out matching connection on the connection characteristic points of the data segments in the same classification, and further obtaining complete data segments of relevant types of urban areas in the corresponding time nodes;
according to the time node sequence, sequentially connecting the complete data segments corresponding to each time node to obtain complete acquisition data corresponding to the data acquisition period;
the complete acquired data of each kind of sensor in the same data acquisition period are obtained in the same way, and a number is set according to the sensor type corresponding to the complete acquired data, for example a1,1, a1,2, ..., an,m, where an,m represents the complete acquired data of the m-th kind of sensor in the urban area numbered Sn, and m is a natural number greater than 0;
the following uses the acquisition of the complete camera data as an example to describe the principle of obtaining the complete acquired data of each type of sensor:
the management and control system extracts the video data of the urban area acquired by the cameras according to the sensor type; after setting factor labels on the video data, it divides the video data into a plurality of image data segments at the set time cut points, and sets connection feature points according to the factor labels carried by the objects at the edge positions of each image data segment;
a connection feature point consists of a characteristic pixel segment and the factor labels of the object in which it is located; for example, for a connection feature point carrying a dynamic factor label and a pedestrian factor label, the characteristic pixel segment may be a pedestrian head-portrait pixel segment;
all image data segments of the same time node are matched and mapped to each other through the connection feature points, and the redundant image areas between the connection feature points are automatically deleted, thereby obtaining the complete image data of the urban area at the corresponding time node;
the complete image data of all time nodes within the same data acquisition period are obtained in the same way, and all the complete image data are connected in sequence according to the time node order to obtain the complete video data.
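The splicing principle can be sketched as follows. This is a simplified illustration in which each connection feature point is reduced to a hashable signature, whereas the method described above matches characteristic pixel segments together with their factor labels; the Segment class and helper names are assumptions.

```python
# Hypothetical sketch of stitching same-type sensor segments per time node.
# "edge_features" stands in for the connection feature points (characteristic
# pixel segment + factor labels); real matching would compare pixel content.
from dataclasses import dataclass, field

@dataclass
class Segment:
    sensor_id: str
    time_node: int
    payload: list                                     # the segment's data (e.g. frames)
    edge_features: set = field(default_factory=set)   # connection feature signatures

def stitch_time_node(segments):
    """Chain the segments of one time node by shared edge feature points."""
    if not segments:
        return []
    remaining = list(segments)
    stitched = [remaining.pop(0)]
    while remaining:
        tail = stitched[-1]
        match = next((s for s in remaining
                      if tail.edge_features & s.edge_features), None)
        if match is None:
            break                      # no shared connection feature point
        remaining.remove(match)
        stitched.append(match)
    # concatenate payloads; redundant overlapping regions would be dropped here
    return [item for seg in stitched for item in seg.payload]

def stitch_period(segments_by_node):
    """Concatenate the complete segments of all time nodes in order."""
    return [frame
            for node in sorted(segments_by_node)
            for frame in stitch_time_node(segments_by_node[node])]
```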
Further, a three-dimensional coordinate system is set, the complete acquired data corresponding to each type of sensor are mapped into the three-dimensional coordinate system, four terminal coordinate points are set on each set of complete acquired data according to the size of the urban area, and a unified time axis is set;
a fixed mapping label is set for objects with a static factor label in the complete acquired data corresponding to each kind of sensor, and a mobile mapping label is set for objects with a dynamic factor label;
first, the objects with fixed mapping labels in the complete acquired data corresponding to each kind of sensor are locally mapped and overlapped along the time axis of the three-dimensional coordinate system;
then, taking the time axis as a reference, the motion vector of each object with a mobile mapping label is obtained, and the objects with the same motion vector in the complete acquired data corresponding to each kind of sensor are mapped to each other, thereby obtaining the multidimensional dataset of the corresponding urban area in the corresponding data acquisition period.
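The cross-sensor mapping of dynamic objects can be sketched as below, assuming each object observed by a sensor type is reduced to a track of (time, x, y, z) samples along the unified time axis; the tolerance value and function names are illustrative assumptions.

```python
# Hypothetical sketch: associate objects observed by different sensor types
# when their motion vectors (displacement per unit time) coincide.
import math

def motion_vector(track):
    """track: list of (t, x, y, z) samples along the unified time axis."""
    (t0, x0, y0, z0), (t1, x1, y1, z1) = track[0], track[-1]
    dt = (t1 - t0) or 1.0
    return ((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt)

def vectors_match(v1, v2, tol=0.1):
    """Illustrative tolerance on the difference between two motion vectors."""
    return math.dist(v1, v2) <= tol

def map_across_sensors(layer_a, layer_b, tol=0.1):
    """layer_a / layer_b: dicts {object_id: track} from two sensor types.
    Returns pairs of object ids whose motion vectors coincide."""
    pairs = []
    for id_a, track_a in layer_a.items():
        va = motion_vector(track_a)
        for id_b, track_b in layer_b.items():
            if vectors_match(va, motion_vector(track_b), tol):
                pairs.append((id_a, id_b))
    return pairs
```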
Step three: establishing high-dimensional feature points and low-dimensional feature points for the factor labels of the objects in the multidimensional dataset of each urban area, and further establishing a scene application model for each urban area;
specifically, the objects with factor labels in the complete acquired data of all layers of the multidimensional dataset are traversed; the objects with dynamic factor labels are marked with high-dimensional feature points, and the objects with static factor labels are marked with low-dimensional feature points;
for an object with high-dimensional characteristic points, establishing a corresponding local scene model according to pedestrian factor labels, system factor labels or environment factor labels;
taking a pedestrian or an object with high-dimensional feature points and a pedestrian factor label as an example, a plurality of moving feature points are set on the pedestrian or object with the time axis of the multidimensional dataset as a reference, and a plurality of displacement trajectory lines or deformation trajectory lines of the pedestrian or object within the data acquisition period are then established;
establishing a three-dimensional image model of a pedestrian or an object with high-dimensional feature points and pedestrian factor labels, marking the moving feature points on corresponding positions of the three-dimensional image model, and establishing a moving three-dimensional model of the pedestrian or the object corresponding to the high-dimensional feature points and the pedestrian factor labels in a data acquisition period according to moving track lines or deformation track lines corresponding to the moving feature points;
meanwhile, a moving three-dimensional model of the corresponding type is established from the complete acquired data of each layer in the multidimensional dataset; for example, from the pedestrian thermal imaging data acquired by an infrared sensor, a moving three-dimensional thermal imaging model of the pedestrian is established;
according to the hierarchical sequence of the complete acquired data of each layer in the multidimensional dataset, performing superposition mapping on the three-dimensional model corresponding to the complete acquired data of each layer, and further obtaining a local scene model corresponding to the pedestrian or the object;
and so on, obtaining a local scene model of each object with high-dimensional characteristic points;
for an object with low-dimensional characteristic points, performing superposition mapping on a three-dimensional model corresponding to the complete acquired data of each layer to obtain a local scene model corresponding to the object with the low-dimensional characteristic points;
further, the local scene models with low-dimensional feature points are connected in sequence, thereby obtaining the static scene model of the corresponding urban area;
and each moving trajectory with high-dimensional feature points is mapped to the corresponding position in the static scene model, thereby obtaining the scene application model of the corresponding urban area.
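The assembly of a single urban area's scene application model can be sketched as follows; the container classes are illustrative placeholders for whatever three-dimensional representation is actually used, and connecting the static local scene models by sorting on an anchor position is a simplifying assumption.

```python
# Hypothetical sketch of assembling one urban area's scene application model:
# static local scene models are connected in sequence, then each trajectory
# of a high-dimensional object is mapped into the static scene.
from dataclasses import dataclass, field

@dataclass
class LocalSceneModel:
    object_id: str
    position: tuple                 # anchor position in the area's coordinates
    layers: list                    # superimposed 3D models from each data layer

@dataclass
class SceneApplicationModel:
    area_number: str
    static_models: list = field(default_factory=list)
    trajectories: dict = field(default_factory=dict)   # object_id -> trajectory points

def build_scene_application_model(area_number, low_dim_models, high_dim_tracks):
    model = SceneApplicationModel(area_number)
    # connect the low-dimensional (static) local scene models in sequence
    model.static_models = sorted(low_dim_models, key=lambda m: m.position)
    # map each high-dimensional trajectory into the static scene
    for object_id, track in high_dim_tracks.items():
        model.trajectories[object_id] = list(track)
    return model
```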
Step four: performing time synchronization between the urban areas, splicing the scene application models of the urban areas to obtain a city scene application model, carrying out virtual-layer scene prediction for the city through the city scene application model, and verifying whether the city scene application model is accurate through the scene development results of the real layer;
specifically, a cloud computing platform is set, and after the establishment of a scene application model of a city area is completed, a management and control system of each city area sends clock signals to the cloud computing platform, wherein the clock signals comprise the serial numbers of the management and control systems and the system time;
after the cloud computing platform confirms, according to the number and serial numbers of the clock signals, that the clock signals of all urban areas have been received, it takes its own system time as the standard time, compares it with the system time in each clock signal, and at the same time compares the system times of the clock signals with each other using majority decision;
if the absolute value of the time difference between the standard time and the system time is equal to (0,0.001)]If the standard time is consistent with the system time, the corresponding clock signal number Num consistent with the standard time and the system time is counted 1
If the absolute value of the time difference between the standard time and the system time is not (0,0.001)]If the standard time is inconsistent with the system time, the corresponding clock signal number Num inconsistent with the standard time and the system time is counted 2
Meanwhile, the same method is adopted, each system time is compared, and clock signals with consistent system time are classified and counted;
if Num 1 ≥Num 2 Generating a clock signal according to the standard time and sending the clock signal to the control systems, and then updating the system time of each control system according to the received clock signal;
if Num 1 <Num 2 Selecting the system time with the largest number of clock signals with consistent system time as standard time to generate, sending the clock signals to the control systems, and then updating the system time of each control system according to the received clock signals, and simultaneously updating the system time of each cloud computing platform according to the new standard time;
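The majority-decision synchronization described above can be sketched as follows, assuming each clock signal is reduced to a (controller number, system time) pair and using the 0.001 tolerance stated above; bucketing near-equal times to find the largest consistent group, and treating a zero difference as consistent, are illustrative simplifications.

```python
# Hypothetical sketch of the majority-decision time synchronization.
# Each clock signal is (controller_number, system_time_in_seconds).
from collections import Counter

TOL = 0.001  # seconds; consistency tolerance from the description

def choose_standard_time(platform_time, clock_signals):
    times = [t for _, t in clock_signals]
    num1 = sum(1 for t in times if abs(platform_time - t) <= TOL)   # consistent (Num1)
    num2 = len(times) - num1                                        # inconsistent (Num2)
    if num1 >= num2:
        return platform_time
    # otherwise take the system time shared by the largest group of signals;
    # bucketing by the tolerance lets near-equal clocks count together
    buckets = Counter(round(t / TOL) for t in times)
    most_common_bucket, _ = buckets.most_common(1)[0]
    return most_common_bucket * TOL

def synchronize(platform_time, clock_signals):
    """Return the standard time broadcast to all management and control systems."""
    standard = choose_standard_time(platform_time, clock_signals)
    # the cloud platform and every management-and-control system would now
    # update their system time to `standard`
    return standard
```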
Further, after time synchronization, the management and control system of each urban area sends its scene application model to the cloud computing platform, and the cloud computing platform then connects the scene application models in sequence according to the urban area numbers corresponding to the scene application models to obtain the city scene application model;
taking the data acquisition period as the time unit, the predicted movement process of each object with high-dimensional feature points in the next data acquisition period of the virtual layer is generated according to the movement processes of the objects with high-dimensional feature points in the city scene application model, thereby obtaining the virtual-layer predicted development result of each urban area;
meanwhile, the management and control system sends data acquisition instructions to each sensor, and a real-layer scene development result is generated using the methods of step two and step three;
the movement or deformation processes of the objects with high-dimensional feature points in the virtual-layer predicted development result and in the real-layer scene development result are compared urban area by urban area, and the number of objects whose movement or deformation processes are consistent is counted;
an accuracy threshold is set according to the total number of objects with high-dimensional feature points in each urban area; if the number of objects with consistent movement or deformation processes is greater than or equal to the accuracy threshold, the scene application model of the corresponding urban area is judged to be accurate;
if the number of objects with consistent movement or deformation processes is smaller than the accuracy threshold, the scene application model of the corresponding urban area is judged to be inaccurate, and the scene application model is re-established according to the acquired data of the corresponding urban area in the next data acquisition period;
and when the scene application models of all the urban areas are verified to be accurate, the scene prediction for the next data acquisition period is carried out through the corresponding city scene application model.
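The per-area accuracy check can be sketched as below; the consistency test (comparing trajectory endpoints) and the threshold ratio are illustrative placeholders, since the method only requires counting the objects whose movement or deformation processes are consistent and comparing that count with an accuracy threshold derived from the total number of high-dimensional objects.

```python
# Hypothetical sketch of verifying a scene application model per urban area.
# `predicted` and `observed` map object_id -> trajectory (list of (t, x, y, z)).
import math

def processes_consistent(pred_track, real_track, tol=1.0):
    """Illustrative consistency test: trajectory endpoints agree within a tolerance."""
    if not pred_track or not real_track:
        return False
    return math.dist(pred_track[-1][1:], real_track[-1][1:]) <= tol

def verify_area(predicted, observed, threshold_ratio=0.9):
    total = len(observed)
    consistent = sum(
        1 for obj_id, real_track in observed.items()
        if processes_consistent(predicted.get(obj_id, []), real_track)
    )
    accuracy_threshold = threshold_ratio * total   # derived from the object count
    accurate = consistent >= accuracy_threshold
    # an inaccurate area would trigger rebuilding its model from the next
    # acquisition period's data
    return accurate, consistent, accuracy_threshold
```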
The above embodiments are only for illustrating the technical method of the present application and not for limiting the same, and it should be understood by those skilled in the art that the technical method of the present application may be modified or substituted without departing from the spirit and scope of the technical method of the present application.

Claims (8)

1. A scene application model construction method based on a smart city is characterized by comprising the following steps:
step one, dividing a city into a plurality of urban areas of equal size, setting an equipment group for each urban area, obtaining the acquired data of each urban area through the equipment groups, and setting factor labels according to the states of the objects in the acquired data;
step two, splicing the acquired data of sensors of the same kind according to time periods, and mapping the spliced acquired data of sensors of different kinds to each other to establish a multidimensional dataset;
step three, establishing high-dimensional feature points and low-dimensional feature points for the factor labels of the objects in the multidimensional dataset of each urban area, and further establishing a scene application model for each urban area;
and step four, performing time synchronization between the urban areas, splicing the scene application models of the urban areas to obtain a city scene application model, carrying out virtual-layer scene prediction for the city through the city scene application model, and verifying whether the city scene application model is accurate through the scene development results of the real layer.
2. The method for constructing the scene application model based on the smart city according to claim 1, wherein the equipment group consists of a plurality of sensors, wireless signal devices and a management and control system; each management and control system is assigned the same number as the urban area where it is located, is used for managing the acquired data of each sensor, and a time synchronization mechanism is provided among the management and control systems.
3. The smart city-based scene application model construction method as claimed in claim 2, wherein the process of acquiring the acquired data of the city area through the device group comprises:
setting a data acquisition period, and further carrying out time synchronization calibration among all the control systems when the data acquisition period starts, and sending data acquisition instructions to all the sensors of the equipment group where the control systems are located after confirming that the time among all the control systems is consistent;
and after the data acquisition period ends, the management and control system sets static factor labels and dynamic factor labels for the acquired data of each sensor, wherein both the static factor label and the dynamic factor label carry three sub-factor labels: a pedestrian factor label, a system factor label and an environment factor label.
4. A method for constructing a smart city based scene application model as claimed in claim 3, wherein the process of splicing the collected data of the same kind of sensor comprises:
setting a time tangent point, and dividing acquired data of the same kind of sensors in the same data acquisition period into a plurality of data fragments by taking a time node as a unit according to the time tangent point;
meanwhile, setting a plurality of connection characteristic points for the factor labels of the objects at the edge positions of all the data segments, and classifying the data segments of the same time node;
matching and connecting the connection characteristic points of the data segments under the same classification to obtain complete data segments of relevant types of urban areas in corresponding time nodes;
and sequentially connecting the complete data fragments corresponding to the time nodes according to the time node sequence, so as to obtain the complete acquired data corresponding to the data acquisition period.
5. The smart city-based scene application model building method of claim 4, wherein said multi-dimensional dataset building process comprises:
setting a three-dimensional coordinate system, mapping the complete acquired data corresponding to each type of sensor into the three-dimensional coordinate system, and setting a unified time axis;
setting a fixed mapping label for objects with a static factor label in the complete acquired data corresponding to each kind of sensor, and setting a mobile mapping label for objects with a dynamic factor label;
carrying out local mapping and overlapping of the objects with fixed mapping labels in the complete acquired data corresponding to each kind of sensor along the time axis of the three-dimensional coordinate system;
and, taking the time axis as a reference, obtaining the motion vector of each object with a mobile mapping label, and mapping the objects with the same motion vector in the complete acquired data corresponding to each kind of sensor to each other, thereby obtaining the multidimensional dataset of the corresponding urban area in the corresponding data acquisition period.
6. The smart city-based scene application model building method as claimed in claim 5, wherein the scene application model building process comprises:
traversing the objects with factor labels in the complete acquired data of all layers of the multidimensional dataset, marking the objects with dynamic factor labels with high-dimensional feature points, and marking the objects with static factor labels with low-dimensional feature points;
for an object with high-dimensional characteristic points, establishing a corresponding local scene model according to pedestrian factor labels, system factor labels or environment factor labels;
setting a plurality of moving characteristic points on an object by taking a time axis on a multidimensional data set as a reference point, and further establishing a plurality of displacement track lines or deformation track lines of the corresponding object in a data acquisition period;
establishing a three-dimensional image model of an object, marking the moving characteristic points on corresponding positions of the three-dimensional image model, and establishing a moving three-dimensional model of the corresponding object in a data acquisition period according to moving track lines or deformation track lines corresponding to the moving characteristic points;
meanwhile, according to the complete acquired data of each layer in the multi-dimensional data set, a mobile three-dimensional model of the corresponding type of the complete acquired data is established;
according to the hierarchical sequence of the complete acquired data of each layer in the multidimensional dataset, performing superposition mapping on the three-dimensional model corresponding to the complete acquired data of each layer, and further obtaining a local scene model of a corresponding object, and obtaining a local scene model of each object with high-dimensional characteristic points;
for an object with low-dimensional characteristic points, performing superposition mapping on a three-dimensional model corresponding to the complete acquired data of each layer to obtain a local scene model corresponding to the object with the low-dimensional characteristic points;
and sequentially connecting the local scene models with the low-dimensional characteristic points in the multi-dimensional dataset to obtain a static scene model corresponding to the urban area, and mapping the moving track of the object with the high-dimensional characteristic points into the static scene model to obtain a scene application model corresponding to the urban area.
7. The smart city-based scene application model building method as recited in claim 6, wherein the process of time synchronizing between the city areas comprises:
setting a cloud computing platform, and after the scene application model of the urban area is built, sending clock signals to the cloud computing platform by the management and control systems of all the urban areas, wherein the clock signals comprise the serial numbers of the management and control systems and the system time;
after the cloud computing platform confirms, according to the number and serial numbers of the clock signals, that the clock signals of all urban areas have been received, it takes its own system time as the standard time, compares it with the system time of each clock signal, and at the same time compares the system times of the clock signals with each other using majority decision.
8. The smart city-based scene application model building method as recited in claim 7, wherein the process of building the city scene application model and verifying whether the city scene application model is accurate comprises:
after time synchronization, the management and control system of each urban area sends its scene application model to the cloud computing platform, and the scene application models are connected in sequence according to the urban area numbers corresponding to the scene application models, thereby obtaining the city scene application model;
taking a data acquisition period as the time unit, generating the predicted movement process of each object with high-dimensional feature points in the next data acquisition period of the virtual layer according to the movement processes of the objects with high-dimensional feature points in the city scene application model, thereby obtaining the virtual-layer predicted development result of each urban area; at the same time, the management and control system sends a data acquisition instruction to each sensor, thereby generating the real-layer scene development result;
comparing the virtual layer prediction development results of each urban area with the movement or deformation process of each object with high-dimensional characteristic points in the real layer scene development results by taking the urban area as a unit, and counting the number of the objects with consistent movement or deformation processes;
and setting an accuracy threshold according to the total number of objects with high-dimensional feature points in each urban area, comparing the number of objects with consistent movement or deformation processes with the accuracy threshold, and judging whether the scene application model of the urban area is accurate according to the comparison result.
CN202311403879.XA 2023-10-27 2023-10-27 Scene application model construction method based on smart city Active CN117132178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311403879.XA CN117132178B (en) 2023-10-27 2023-10-27 Scene application model construction method based on smart city

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311403879.XA CN117132178B (en) 2023-10-27 2023-10-27 Scene application model construction method based on smart city

Publications (2)

Publication Number Publication Date
CN117132178A 2023-11-28
CN117132178B (en) 2023-12-29

Family

ID=88853091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311403879.XA Active CN117132178B (en) 2023-10-27 2023-10-27 Scene application model construction method based on smart city

Country Status (1)

Country Link
CN (1) CN117132178B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067062A (en) * 2022-01-17 2022-02-18 深圳慧拓无限科技有限公司 Method and system for simulating real driving scene, electronic equipment and storage medium
CN114863054A (en) * 2022-05-30 2022-08-05 山西省城市规划和发展研究有限公司 Smart city planning system based on simulation dynamic technology
CN114897444A (en) * 2022-07-12 2022-08-12 苏州大学 Method and system for identifying service facility requirements in urban subarea

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067062A (en) * 2022-01-17 2022-02-18 深圳慧拓无限科技有限公司 Method and system for simulating real driving scene, electronic equipment and storage medium
CN114863054A (en) * 2022-05-30 2022-08-05 山西省城市规划和发展研究有限公司 Smart city planning system based on simulation dynamic technology
CN114897444A (en) * 2022-07-12 2022-08-12 苏州大学 Method and system for identifying service facility requirements in urban subarea

Also Published As

Publication number Publication date
CN117132178B (en) 2023-12-29

Similar Documents

Publication Publication Date Title
US10832478B2 (en) Method and system for virtual sensor data generation with depth ground truth annotation
EP3211596A1 (en) Generating a virtual world to assess real-world video analysis performance
CN108256439A (en) A kind of pedestrian image generation method and system based on cycle production confrontation network
CN108234927A (en) Video frequency tracking method and system
CN112396000B (en) Method for constructing multi-mode dense prediction depth information transmission model
CN116484971A (en) Automatic driving perception self-learning method and device for vehicle and electronic equipment
CN112634368A (en) Method and device for generating space and OR graph model of scene target and electronic equipment
CN112634369A (en) Space and or graph model generation method and device, electronic equipment and storage medium
CN116453121B (en) Training method and device for lane line recognition model
WO2023231991A1 (en) Traffic signal lamp sensing method and apparatus, and device and storage medium
CN114758337A (en) Semantic instance reconstruction method, device, equipment and medium
Shi et al. An integrated traffic and vehicle co-simulation testing framework for connected and autonomous vehicles
WO2023016182A1 (en) Pose determination method and apparatus, electronic device, and readable storage medium
WO2021146906A1 (en) Test scenario simulation method and apparatus, computer device, and storage medium
CN115147644A (en) Method, system, device and storage medium for training and describing image description model
WO2007147171A2 (en) Scalable clustered camera system and method for multiple object tracking
CN112597996B (en) Method for detecting traffic sign significance in natural scene based on task driving
CN117132178B (en) Scene application model construction method based on smart city
CN116524382A (en) Bridge swivel closure accuracy inspection method system and equipment
CN103903269B (en) The description method and system of ball machine monitor video
CN110148205A (en) A kind of method and apparatus of the three-dimensional reconstruction based on crowdsourcing image
CN115147549A (en) Urban three-dimensional model generation and updating method based on multi-source data fusion
CN112766068A (en) Vehicle detection method and system based on gridding labeling
Dai Semantic Detection of Vehicle Violation Video Based on Computer 3D Vision
CN115578246B (en) Non-aligned visible light and infrared mode fusion target detection method based on style migration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant