CN112270296A - Cloud platform based smart city visual management system and method


Info

Publication number
CN112270296A
Authority
CN
China
Prior art keywords
event
cloud platform
solution
image information
video
Prior art date
Legal status
Withdrawn
Application number
CN202011269104.4A
Other languages
Chinese (zh)
Inventor
胡浩
Current Assignee
Beijing Jingyi Technology Co ltd
Original Assignee
Beijing Jingyi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingyi Technology Co ltd filed Critical Beijing Jingyi Technology Co ltd
Priority to CN202011269104.4A
Publication of CN112270296A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 - Feature extraction based on approximation criteria, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/26 - Government or public services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention discloses a cloud platform based smart city visual management system, which comprises an event acquisition unit and a visualization unit on the client side, and a cloud platform smart center, a cloud platform data center and a manual compensation unit on the server side. According to the invention, the event-generator user shoots an urban event as event image information and transmits it to the cloud platform smart center; the cloud platform smart center matches the event image information against the cloud platform data center, retrieves a solution video for handling the event involved in the event image information, and displays the solution video to the event-generator user, so that the event generator can handle the event by himself or herself according to the solution shown in the video. This fully mobilizes the initiative of grassroots residents as event generators, greatly reduces the workload of the functional departments, avoids consuming the optimal event-handling time in the process of feeding back to the functional departments, and improves event-handling efficiency.

Description

Cloud platform based smart city visual management system and method
Technical Field
The invention relates to the technical field of smart cities, in particular to a cloud platform-based smart city visual management system and method.
Background
A smart city uses new-generation information technologies such as the Internet of Things, cloud computing and geospatial infrastructures, together with tools and methods such as wikis, social networks and network-based all-media converged communication terminals, to achieve comprehensive and thorough perception, broadband ubiquitous interconnection, intelligent converged applications, and sustainable innovation characterized by user innovation, open innovation, mass innovation and collaborative innovation. With the rise of ubiquitous networks, the converged development of mobile technologies and the democratization of innovation, the smart city in a knowledge-based social environment is the advanced form of urban informatization that follows the digital city.
Patent CN104615701B discloses an embedded big-data visualization engine cluster for a smart city based on a video cloud platform and a method thereof. Its big-data visualization deep-learning and decision engine makes decisions from the obtained industry-stream features, event-stream features and preferred tool set to produce a decision result, and its big-data visualization driving engine transmits the decision result to a large-screen display driving system for presentation, thereby improving the visualization level of big data.
Although CN104615701B improves the visualization level of a smart city, in the smart-city management process city-event decisions are made from industry-stream features, event-stream features and a preferred tool set, so the whole process is mainly oriented toward management departments. Grassroots residents are indispensable to a smart city and are the part that uses it most frequently, yet their initiative is not fully mobilized, and urban emergencies depend entirely on feedback to the functional departments. As a result, the workload of the functional departments is huge, the optimal time for handling an event is consumed in the feedback process, and event-handling efficiency is low.
Disclosure of Invention
The invention aims to provide a cloud platform based smart city visual management system, so as to solve the technical problems in the prior art that the initiative of grassroots residents is not fully mobilized and urban emergencies depend entirely on feedback to the functional departments, so that the workload of the functional departments is huge, the optimal time for handling an event is consumed in the feedback process, and event-handling efficiency is low.
In order to solve the technical problems, the invention specifically provides the following technical scheme:
a smart city visual management system based on a cloud platform comprises an event acquisition unit and a visual unit of a user side, a cloud platform smart center, a cloud platform data center and a manual compensation unit of a server side;
the event acquisition unit is used for acquiring event image information of an event generator user and uploading the event image information to the cloud platform intelligent center;
the cloud platform intelligent center is used for receiving the event image information from the event acquisition unit, extracting event features of the event image information, calling a solution video for processing the event related to the event image information in the cloud platform data center according to the event features of the event image information, transmitting the solution video to the visualization unit, and sending a manual solution application to the manual compensation unit if any solution video for processing the event related to the event image information cannot be called in the cloud platform data center;
the system comprises a visualization unit, a service unit and an event generator, wherein the visualization unit is used for receiving a solution video from a cloud platform smart center and displaying the solution video to an event generator user;
the system comprises an artificial compensation unit, a cloud platform data center and a cloud platform intelligent center, wherein the artificial compensation unit is used for receiving an artificial solution application from the cloud platform intelligent center and carrying out artificial solution, shooting a solution video in the whole process of the artificial solution process, and uploading the solution video to the cloud platform data center for storage after an event is solved;
and the cloud platform data center is used for storing a plurality of solution videos aiming at various events and providing storage and reading functions outwards.
As a preferred scheme of the present invention, the specific steps of the cloud platform smart center performing event feature extraction on event image information are as follows:
A1, performing image processing on the event image information and extracting the original event feature vector it contains;
A2, performing principal component analysis on the original event feature vector by the principal component analysis method to obtain the dimension-reduced final event feature vector;
as a preferred aspect of the present invention, in a1, the extracting the original event feature vector by using an image processing method specifically includes:
a101, performing specification cutting on event image information to be processed by using a frame shape with a fixed specification, and obtaining a pixel image matrix which is contained in the cut event image information and marked as [ i j ], wherein i j is n, i is an image matrix row, j is an image matrix column, and n is the total number of pixels;
a102, decomposing pixels arranged in a matrix in a pixel image matrix [ i x j ] into a vector form, wherein the vector form is marked as {1 x j; 2 x j; 3 x j; …, respectively; i x j };
a103, converting {1 × j; 2 x j; 3 x j; …, respectively; i j is stored as the original event feature vector describing the event image information.
As a preferred embodiment of the present invention, in A2, the specific steps of performing principal component analysis on the original event feature vector to obtain the final event feature vector are as follows:
A201, subtracting the row mean from each row of the original event feature vector {1*j; 2*j; 3*j; …; i*j} in turn to perform decentering;
A202, performing covariance matrix calculation on the decentered original event feature vector to obtain the original event feature covariance matrix, denoted [j*j];
A203, calculating the eigenvalues and eigenvectors of the original event feature covariance matrix [j*j] by SVD, obtaining j eigenvalues and j eigenvectors;
A204, sorting the j eigenvalues from large to small, and selecting the p eigenvectors corresponding to the largest eigenvalues to form a projection matrix [j*p];
A205, multiplying the original event feature vector {1*j; 2*j; 3*j; …; i*j} by the projection matrix [j*p] to reduce dimensions, obtaining the final event feature vector, denoted {1*p; 2*p; 3*p; …; i*p}.
As a preferred scheme of the invention, the solution videos stored in the cloud platform data center are likewise subjected in turn to image processing and principal component analysis and are finally stored in feature-vector form. The original feature matrix vectors of the solution videos extracted after image processing are denoted {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]}, where m is the total number of solution videos, i_m*j_m = n_m is the total number of pixels of the m-th video image, i_m is the number of rows of the m-th video image matrix, j_m is the number of columns of the m-th video image matrix, and [i_m*j_m] decomposes into the vector form {1*j_m; 2*j_m; 3*j_m; …; i_m*j_m}.
As a preferred embodiment of the present invention, the specific steps of performing principal component analysis on the solution videos are as follows:
B1, decentering each item of the solution-video original feature matrix vectors {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]} in turn;
B2, performing covariance matrix calculation on each item of the decentered solution-video original feature matrix vectors to obtain the solution-video original feature covariance matrices, denoted [j_1*j_1], [j_2*j_2], [j_3*j_3], …, [j_m*j_m];
B3, calculating the eigenvalues and eigenvectors of each of the solution-video original feature covariance matrices [j_1*j_1], [j_2*j_2], [j_3*j_3], …, [j_m*j_m] by SVD, obtaining j_1, j_2, j_3, …, j_m eigenvalues and eigenvectors respectively;
B4, sorting each of the j_1, j_2, j_3, …, j_m sets of eigenvalues from large to small, and selecting the p eigenvectors corresponding to the largest eigenvalues to form the projection matrices [j_1*p], [j_2*p], [j_3*p], …, [j_m*p];
B5, multiplying the solution-video original feature matrix vectors {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]} by the corresponding projection matrices [j_1*p], [j_2*p], [j_3*p], …, [j_m*p] to reduce dimensions, obtaining the stored solution-video feature vectors, denoted {[i_1*p], [i_2*p], [i_3*p], …, [i_m*p]}, where [i_m*p] = {1*p; 2*p; 3*p; …; i_m*p}.
As a preferred scheme of the present invention, the specific steps of the cloud platform smart center retrieving from the cloud platform data center, according to the event features of the event image information, a solution video for handling the event involved in the event image information are as follows:
C1, calculating in turn the Euclidean distance between the final event feature vector {1*p; 2*p; 3*p; …; i*p} and each item of the stored solution-video feature vectors {[i_1*p], [i_2*p], [i_3*p], …, [i_m*p]};
C2, selecting the solution video corresponding to the solution-video feature vector with the minimum Euclidean distance in C1 as the solution video for handling the event involved in the event image information.
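For concreteness (the patent states C1 only in words), the distance in C1 can be written out as below; the row count r, taken here as the smaller of i and i_m so that the two feature matrices are comparable entry by entry, is an added assumption rather than part of the original text:

d_m = \sqrt{ \sum_{a=1}^{r} \sum_{b=1}^{p} \left( x_{a,b} - y^{(m)}_{a,b} \right)^{2} }

where x_{a,b} denotes entry (a, b) of the final event feature vector {1*p; 2*p; …; i*p} and y^{(m)}_{a,b} denotes entry (a, b) of the stored solution-video feature vector [i_m*p]; C2 then selects the solution video with the smallest d_m.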
As a preferred scheme of the present invention, the event acquisition unit and the visualization unit are user terminal devices with shooting and playback functions. A management system login portal for uploading event image information and downloading solution videos is installed in the user terminal device; the management system login portal is a web page, a software app or a mini program. The cloud platform smart center and the cloud platform data center are built in a distributed data processing system composed of a plurality of servers and computing hosts for operation processing and data storage, and the management system login portal and the distributed data processing system exchange data and interact with services through network communication.
As a preferred scheme of the present invention, a method for the video cloud platform based smart city visual management system is provided, comprising the following steps:
S1, the event-generator user collects event image information and uploads it to the cloud platform smart center;
S2, the cloud platform smart center receives the event image information from the event acquisition unit, extracts event features from it, retrieves from the cloud platform data center, according to the event features, a solution video for handling the event involved in the event image information, and transmits the solution video to the visualization unit;
S3, the visualization unit receives the solution video from the cloud platform smart center and displays it to the event-generator user;
S4, if the cloud platform smart center cannot retrieve from the cloud platform data center any solution video for handling the event involved in the event image information, it sends a manual solution application to the manual compensation unit;
S5, the manual compensation unit receives the manual solution application from the cloud platform smart center and carries out manual solution; the solution video is recorded throughout the manual solution process, and after the event is solved the solution video is uploaded to the cloud platform data center for storage.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, urban events are shot into event image information by the event generator user and transmitted to the cloud platform smart center, the cloud platform smart center calls the solution video for processing the events related in the event image information from the cloud platform data center according to the event image information in a matching manner and displays the solution video to the event generator user, so that the event generator can automatically process the events according to the solution in the solution video, the autonomy of the basic people as the event generator is fully mobilized, the workload of the functional department is reduced greatly, the optimal event processing time is avoided being consumed in the process of feeding back the functional department, and the event processing efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It should be apparent that the drawings in the following description are merely exemplary, and that other drawings can be derived from the provided drawings by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a block diagram of a visualization management system structure and a flowchart of a method according to an embodiment of the present invention.
The reference numerals in the drawings denote the following, respectively:
1-an event acquisition unit; 2-a visualization unit; 3-cloud platform intelligent center; 4-cloud platform data center; 5-artificial compensation unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the invention provides a cloud platform based smart city visual management system, which includes an event acquisition unit 1 and a visualization unit 2 on the client side, and a cloud platform smart center 3, a cloud platform data center 4 and a manual compensation unit 5 on the server side;
the event acquisition unit 1 is used for acquiring event image information from the event-generator user and uploading the event image information to the cloud platform smart center 3;
the cloud platform smart center 3 is used for receiving the event image information from the event acquisition unit 1, extracting event features from the event image information, retrieving from the cloud platform data center 4, according to the event features, a solution video for handling the event involved in the event image information, and transmitting the solution video to the visualization unit 2; if no solution video for handling the event involved in the event image information can be retrieved from the cloud platform data center 4, it sends a manual solution application to the manual compensation unit 5;
the visualization unit 2 is used for receiving the solution video from the cloud platform smart center 3 and displaying it to the event-generator user;
the manual compensation unit 5 is used for receiving the manual solution application from the cloud platform smart center 3 and carrying out manual solution, recording a solution video throughout the manual solution process, and uploading the solution video to the cloud platform data center 4 for storage after the event is solved; this expands the solution-video data held by the cloud platform data center, so that in actual use solutions for handling all kinds of events are gradually accumulated and the application range of the system is extended;
and the cloud platform data center 4 is used for storing a plurality of solution videos for various events and providing storage and reading functions externally.
The specific steps of the cloud platform smart center 3 extracting event features from the event image information are as follows:
A1, performing image processing on the event image information and extracting the original event feature vector it contains;
A2, performing principal component analysis on the original event feature vector by the principal component analysis method to obtain the dimension-reduced final event feature vector.
In step A1, the specific steps of extracting the original event feature vector by the image processing method are as follows:
A101, cutting the event image information to be processed to a fixed specification using a frame of fixed size, and obtaining the pixel image matrix contained in the cut event image information, denoted [i*j], where i*j = n, i is the number of rows of the image matrix, j is the number of columns of the image matrix, and n is the total number of pixels;
In actual use, the values of i, j and n can be set as needed.
A102, decomposing the pixels arranged as a matrix in the pixel image matrix [i*j] into vector form, denoted {1*j; 2*j; 3*j; …; i*j};
A103, storing {1*j; 2*j; 3*j; …; i*j} as the original event feature vector describing the event image information.
In step A2, the specific steps of performing principal component analysis on the original event feature vector to obtain the final event feature vector are as follows:
A201, subtracting the row mean from each row of the original event feature vector {1*j; 2*j; 3*j; …; i*j} in turn to perform decentering;
A202, performing covariance matrix calculation on the decentered original event feature vector to obtain the original event feature covariance matrix, denoted [j*j];
A203, calculating the eigenvalues and eigenvectors of the original event feature covariance matrix [j*j] by SVD, obtaining j eigenvalues and j eigenvectors;
A204, sorting the j eigenvalues from large to small, and selecting the p eigenvectors corresponding to the largest eigenvalues to form a projection matrix [j*p];
A205, multiplying the original event feature vector {1*j; 2*j; 3*j; …; i*j} by the projection matrix [j*p] to reduce dimensions, obtaining the final event feature vector, denoted {1*p; 2*p; 3*p; …; i*p}.
The value of p is far smaller than the value of i, and p can be chosen according to the computing hardware: if the servers and computing hosts in the distributed processing system perform well, a larger p can be chosen; otherwise a smaller p can be chosen.
The p event feature vectors are sufficient to represent the main features contained in the event image information; compared with directly calculating with the i original event feature vectors in subsequent steps, calculating with the p event feature vectors reduces the computational complexity while ensuring accuracy.
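As a concrete illustration only (not part of the original patent text), steps A101-A205 can be sketched with NumPy as below; the crop size i = 240, j = 320 and the number of retained components p = 16 are hypothetical values, and a single grayscale frame is assumed as input.

```python
import numpy as np

def extract_event_features(frame: np.ndarray, i: int = 240, j: int = 320, p: int = 16) -> np.ndarray:
    """Sketch of steps A101-A205: crop to a fixed [i*j] pixel matrix, then PCA-reduce it to [i*p]."""
    # A101: cut the image to the fixed specification [i*j] (grayscale frame of at least i*j pixels assumed)
    pixel_matrix = frame[:i, :j].astype(np.float64)

    # A102/A103: the i rows of length j form the original event feature vector {1*j; 2*j; ...; i*j}
    original_vectors = pixel_matrix

    # A201: decentering - subtract the row mean from each row
    centered = original_vectors - original_vectors.mean(axis=1, keepdims=True)

    # A202: covariance matrix of the decentered vectors, shape [j*j]
    cov = np.cov(centered, rowvar=False)

    # A203: eigenvalues and eigenvectors via SVD (cov is symmetric, so the columns of U are eigenvectors)
    U, eigenvalues, _ = np.linalg.svd(cov)

    # A204: keep the p eigenvectors with the largest eigenvalues as the projection matrix [j*p]
    projection = U[:, :p]

    # A205: project onto the principal components to obtain the final event feature vector {1*p; ...; i*p}
    return centered @ projection
```

The feature is kept here as an i*p matrix, matching the {1*p; 2*p; …; i*p} notation of step A205; whether it is later flattened before the distance comparison of step C1 is a design choice the patent leaves open.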
The solution videos stored in the cloud platform data center 4 are likewise subjected in turn to image processing and principal component analysis and are finally stored in feature-vector form. The original feature matrix vectors of the solution videos extracted after image processing are denoted {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]}, where m is the total number of solution videos, i_m*j_m = n_m is the total number of pixels of the m-th video image, i_m is the number of rows of the m-th video image matrix, j_m is the number of columns of the m-th video image matrix, and [i_m*j_m] decomposes into the vector form {1*j_m; 2*j_m; 3*j_m; …; i_m*j_m}.
The specific steps of performing principal component analysis on the solution videos are as follows:
B1, decentering each item of the solution-video original feature matrix vectors {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]} in turn;
Each item in {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]} is decomposed into vector form {1*j_1; 2*j_1; 3*j_1; …; i_1*j_1}, {1*j_2; 2*j_2; 3*j_2; …; i_2*j_2}, {1*j_3; 2*j_3; 3*j_3; …; i_3*j_3}, …, {1*j_m; 2*j_m; 3*j_m; …; i_m*j_m}.
B2, performing covariance matrix calculation on each item of the decentered solution-video original feature matrix vectors to obtain the solution-video original feature covariance matrices, denoted [j_1*j_1], [j_2*j_2], [j_3*j_3], …, [j_m*j_m];
B3, calculating the eigenvalues and eigenvectors of each of the solution-video original feature covariance matrices [j_1*j_1], [j_2*j_2], [j_3*j_3], …, [j_m*j_m] by SVD, obtaining j_1, j_2, j_3, …, j_m eigenvalues and eigenvectors respectively;
The counts j_1, j_2, j_3, …, j_m correspond one-to-one to the covariance matrices [j_1*j_1], [j_2*j_2], [j_3*j_3], …, [j_m*j_m].
B4, sorting each of the j_1, j_2, j_3, …, j_m sets of eigenvalues from large to small, and selecting the p eigenvectors corresponding to the largest eigenvalues to form the projection matrices [j_1*p], [j_2*p], [j_3*p], …, [j_m*p];
B5, multiplying the solution-video original feature matrix vectors {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]} by the corresponding projection matrices [j_1*p], [j_2*p], [j_3*p], …, [j_m*p] to reduce dimensions, obtaining the stored solution-video feature vectors, denoted {[i_1*p], [i_2*p], [i_3*p], …, [i_m*p]}, where [i_m*p] = {1*p; 2*p; 3*p; …; i_m*p}.
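Continuing the sketch (again not taken from the patent itself), the stored library of steps B1-B5 could be built by applying the same per-frame PCA to one representative frame of each solution video; the choice of a single key frame per video and the dictionary-based storage are assumptions made only for illustration.

```python
import numpy as np
from typing import Dict

def build_solution_library(video_keyframes: Dict[str, np.ndarray],
                           i: int = 240, j: int = 320, p: int = 16) -> Dict[str, np.ndarray]:
    """Sketch of steps B1-B5: PCA-reduce one representative grayscale frame per stored solution video."""
    library: Dict[str, np.ndarray] = {}
    for video_id, frame in video_keyframes.items():
        pixel_matrix = frame[:i, :j].astype(np.float64)                      # original feature matrix [i_m*j_m]
        centered = pixel_matrix - pixel_matrix.mean(axis=1, keepdims=True)   # B1: decentering
        cov = np.cov(centered, rowvar=False)                                 # B2: covariance matrix [j_m*j_m]
        U, _, _ = np.linalg.svd(cov)                                         # B3: eigenvectors via SVD
        projection = U[:, :p]                                                # B4: projection matrix [j_m*p]
        library[video_id] = centered @ projection                            # B5: stored feature vector [i_m*p]
    return library
```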
The specific steps of the cloud platform smart center 3 retrieving from the cloud platform data center 4, according to the event features of the event image information, a solution video for handling the event involved in the event image information are as follows:
C1, calculating in turn the Euclidean distance between the final event feature vector {1*p; 2*p; 3*p; …; i*p} and each item of the stored solution-video feature vectors {[i_1*p], [i_2*p], [i_3*p], …, [i_m*p]};
Each item in {[i_1*p], [i_2*p], [i_3*p], …, [i_m*p]} is converted into vector form, denoted {1*p; 2*p; 3*p; …; i_1*p}, {1*p; 2*p; 3*p; …; i_2*p}, {1*p; 2*p; 3*p; …; i_3*p}, …, {1*p; 2*p; 3*p; …; i_m*p}; the Euclidean distance between {1*p; 2*p; 3*p; …; i*p} and each of these items is then calculated in turn.
The tasks of calculating the Euclidean distance between {1*p; 2*p; 3*p; …; i*p} and each of these items are distributed to the servers and computing hosts in the distributed processing system and processed synchronously; the Euclidean distances are computed in parallel, and the results only need to be gathered and sorted to select the minimum, which greatly improves computational efficiency.
C2, selecting the solution video corresponding to the solution-video feature vector with the minimum Euclidean distance in C1 as the solution video for handling the event involved in the event image information.
The smaller the Euclidean distance, the more similar the feature vectors of the event image information and of the event solved in the solution video, and the greater the probability that the solution in that solution video can solve the event involved in the event image information.
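The matching step can be sketched as follows; this is not the patent's implementation, the distributed servers and computing hosts are approximated by a local process pool, and the max_distance threshold (deciding when no usable solution video exists, so that the manual path of S4/S5 applies) is an added assumption, since the patent does not state how that failure case is detected.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from typing import Dict, Optional, Tuple

def _distance(task: Tuple[str, np.ndarray, np.ndarray]) -> Tuple[str, float]:
    """Euclidean distance between the query features and one stored solution-video feature matrix."""
    video_id, query, stored = task
    rows = min(len(query), len(stored))            # assumption: compare over the overlapping rows
    return video_id, float(np.linalg.norm(query[:rows] - stored[:rows]))

def match_solution_video(query_features: np.ndarray,
                         library: Dict[str, np.ndarray],
                         max_distance: Optional[float] = None) -> Optional[str]:
    """Sketch of steps C1-C2: pick the stored solution video with the minimum Euclidean distance."""
    if not library:
        return None
    tasks = [(vid, query_features, feats) for vid, feats in library.items()]
    with ProcessPoolExecutor() as pool:            # stand-in for the distributed hosts described above
        distances = dict(pool.map(_distance, tasks))    # C1: distances computed in parallel
    best_id = min(distances, key=distances.get)         # C2: gather the results and take the minimum
    if max_distance is not None and distances[best_id] > max_distance:
        return None                                      # treated as "no solution video retrieved"
    return best_id
```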
The event acquisition unit 1 and the visualization unit 2 are user terminal devices with shooting and playback functions. A management system login portal for uploading event image information and downloading solution videos is installed in the user terminal device; the management system login portal is a web page, a software app or a mini program. The cloud platform smart center 3 and the cloud platform data center 4 are built in a distributed data processing system composed of a plurality of servers and computing hosts that perform operation processing and data storage, and the management system login portal and the distributed data processing system exchange data and interact with services through network communication.
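Purely to illustrate the portal-to-platform interaction described above, a minimal upload endpoint might look like the sketch below; the route, field name and response shape are assumptions for illustration and are not specified by the patent.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/events", methods=["POST"])          # hypothetical endpoint exposed by the smart center
def submit_event():
    """Receive event image information from the login portal and answer with a solution video (or not)."""
    image = request.files.get("event_image")          # hypothetical multipart field name
    if image is None:
        return jsonify({"error": "event_image is required"}), 400
    # ... feature extraction and matching (see the sketches above) would run here ...
    matched_video_url = None                          # placeholder: None means no stored solution matched
    if matched_video_url is None:
        return jsonify({"status": "manual_solution_requested"}), 202
    return jsonify({"status": "ok", "solution_video": matched_video_url})

if __name__ == "__main__":
    app.run()
```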
Based on the above structure of the video cloud platform based smart city visual management system, the invention provides a method comprising the following steps:
S1, the event-generator user collects event image information and uploads it to the cloud platform smart center 3;
S2, the cloud platform smart center 3 receives the event image information from the event acquisition unit 1, extracts event features from it, retrieves from the cloud platform data center 4, according to the event features, a solution video for handling the event involved in the event image information, and transmits the solution video to the visualization unit 2;
S3, the visualization unit 2 receives the solution video from the cloud platform smart center 3 and displays it to the event-generator user;
S4, if the cloud platform smart center 3 cannot retrieve from the cloud platform data center 4 any solution video for handling the event involved in the event image information, it sends a manual solution application to the manual compensation unit 5;
S5, the manual compensation unit 5 receives the manual solution application from the cloud platform smart center 3 and carries out manual solution; the solution video is recorded throughout the manual solution process, and after the event is solved the solution video is uploaded to the cloud platform data center 4 for storage.
According to the invention, the event-generator user shoots an urban event as event image information and transmits it to the cloud platform smart center 3; the cloud platform smart center 3 matches the event image information against the cloud platform data center 4, retrieves a solution video for handling the event involved in the event image information, and displays the solution video to the event-generator user, so that the event generator can handle the event by himself or herself according to the solution shown in the video. This fully mobilizes the initiative of grassroots residents as event generators, greatly reduces the workload of the functional departments, avoids consuming the optimal event-handling time in the process of feeding back to the functional departments, and improves event-handling efficiency.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.

Claims (9)

1. A cloud platform based smart city visual management system, characterized in that: the system comprises an event acquisition unit (1) and a visualization unit (2) on the client side, and a cloud platform smart center (3), a cloud platform data center (4) and a manual compensation unit (5) on the server side;
the event acquisition unit (1) is used for acquiring event image information generated by an event generator user and uploading the event image information to the cloud platform smart center (3);
the cloud platform smart center (3) is used for receiving the event image information from the event acquisition unit (1), extracting event features from the event image information, retrieving from the cloud platform data center (4), according to the event features, a solution video for handling the event involved in the event image information, and transmitting the solution video to the visualization unit (2); if no solution video for handling the event involved in the event image information can be retrieved from the cloud platform data center (4), it sends a manual solution application to the manual compensation unit (5);
the visualization unit (2) is used for receiving the solution video from the cloud platform smart center (3) and displaying the solution video to an event generator user;
the manual compensation unit (5) is used for receiving manual solution applications from the cloud platform smart center (3) and performing manual solution, shooting a solution video in the whole process of the manual solution process, and uploading the solution video to the cloud platform data center (4) for storage after the event solution is completed;
and the cloud platform data center (4) is used for storing a plurality of solution videos aiming at various events and providing storage and reading functions outwards.
2. The cloud platform-based smart city visual management system according to claim 1, wherein: the cloud platform smart center (3) extracts event features of event image information by the following specific steps:
A1, performing image processing on the event image information and extracting the original event feature vector it contains;
and A2, performing principal component analysis on the original event feature vector by the principal component analysis method to obtain the dimension-reduced final event feature vector.
3. The cloud platform-based smart city visual management system according to claim 2, wherein: in step A1, the specific steps of extracting the original event feature vector by the image processing method are as follows:
A101, cutting the event image information to be processed to a fixed specification using a frame of fixed size, and obtaining the pixel image matrix contained in the cut event image information, denoted [i*j], where i*j = n, i is the number of rows of the image matrix, j is the number of columns of the image matrix, and n is the total number of pixels;
A102, decomposing the pixels arranged as a matrix in the pixel image matrix [i*j] into vector form, denoted {1*j; 2*j; 3*j; …; i*j};
A103, storing {1*j; 2*j; 3*j; …; i*j} as the original event feature vector describing the event image information.
4. The cloud platform-based smart city visual management system according to claim 3, wherein: in step A2, the specific steps of performing principal component analysis on the original event feature vector to obtain the final event feature vector are as follows:
A201, subtracting the row mean from each row of the original event feature vector {1*j; 2*j; 3*j; …; i*j} in turn to perform decentering;
A202, performing covariance matrix calculation on the decentered original event feature vector to obtain the original event feature covariance matrix, denoted [j*j];
A203, calculating the eigenvalues and eigenvectors of the original event feature covariance matrix [j*j] by SVD, obtaining j eigenvalues and j eigenvectors;
A204, sorting the j eigenvalues from large to small, and selecting the p eigenvectors corresponding to the largest eigenvalues to form a projection matrix [j*p];
A205, multiplying the original event feature vector {1*j; 2*j; 3*j; …; i*j} by the projection matrix [j*p] to reduce dimensions, obtaining the final event feature vector, denoted {1*p; 2*p; 3*p; …; i*p}.
5. The cloud platform-based smart city visual management system according to claim 3 or 4, wherein: the solution videos stored in the cloud platform data center (4) are subjected in turn to image processing and principal component analysis and are finally stored in feature-vector form; the original feature matrix vectors of the solution videos extracted after image processing are denoted {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]}, where m is the total number of solution videos, i_m*j_m = n_m is the total number of pixels of the m-th video image, i_m is the number of rows of the m-th video image matrix, j_m is the number of columns of the m-th video image matrix, and [i_m*j_m] decomposes into the vector form {1*j_m; 2*j_m; 3*j_m; …; i_m*j_m}.
6. The cloud platform-based smart city visual management system according to claim 3, wherein: the specific steps of performing principal component analysis on the solution videos are as follows:
B1, decentering each item of the solution-video original feature matrix vectors {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]} in turn;
B2, performing covariance matrix calculation on each item of the decentered solution-video original feature matrix vectors to obtain the solution-video original feature covariance matrices, denoted [j_1*j_1], [j_2*j_2], [j_3*j_3], …, [j_m*j_m];
B3, calculating the eigenvalues and eigenvectors of each of the solution-video original feature covariance matrices [j_1*j_1], [j_2*j_2], [j_3*j_3], …, [j_m*j_m] by SVD, obtaining j_1, j_2, j_3, …, j_m eigenvalues and eigenvectors respectively;
B4, sorting each of the j_1, j_2, j_3, …, j_m sets of eigenvalues from large to small, and selecting the p eigenvectors corresponding to the largest eigenvalues to form the projection matrices [j_1*p], [j_2*p], [j_3*p], …, [j_m*p];
B5, multiplying the solution-video original feature matrix vectors {[i_1*j_1], [i_2*j_2], [i_3*j_3], …, [i_m*j_m]} by the corresponding projection matrices [j_1*p], [j_2*p], [j_3*p], …, [j_m*p] to reduce dimensions, obtaining the stored solution-video feature vectors, denoted {[i_1*p], [i_2*p], [i_3*p], …, [i_m*p]}, where [i_m*p] = {1*p; 2*p; 3*p; …; i_m*p}.
7. The cloud platform-based smart city visual management system according to claim 4, wherein the specific steps of the cloud platform smart center (3) retrieving from the cloud platform data center (4), according to the event features of the event image information, a solution video for handling the event involved in the event image information are as follows:
C1, calculating in turn the Euclidean distance between the final event feature vector {1*p; 2*p; 3*p; …; i*p} and each item of the stored solution-video feature vectors {[i_1*p], [i_2*p], [i_3*p], …, [i_m*p]};
C2, selecting the solution video corresponding to the solution-video feature vector with the minimum Euclidean distance in C1 as the solution video for handling the event involved in the event image information.
8. The cloud platform-based smart city visual management system according to claim 2, wherein the event acquisition unit (1) and the visualization unit (2) are user terminal devices with shooting and playback functions; a management system login portal for uploading event image information and downloading solution videos is installed in the user terminal device; the management system login portal is a web page, a software app or a mini program; the cloud platform smart center (3) and the cloud platform data center (4) are built in a distributed data processing system composed of a plurality of servers and computing hosts for operation processing and data storage; and the management system login portal and the distributed data processing system exchange data and interact with services through network communication.
9. A method for a video cloud platform based smart city visual management system according to any one of claims 1-8, comprising the following steps:
S1, the event-generator user collects event image information and uploads it to the cloud platform smart center;
S2, the cloud platform smart center receives the event image information from the event acquisition unit, extracts event features from it, retrieves from the cloud platform data center, according to the event features, a solution video for handling the event involved in the event image information, and transmits the solution video to the visualization unit;
S3, the visualization unit receives the solution video from the cloud platform smart center and displays it to the event-generator user;
S4, if the cloud platform smart center cannot retrieve from the cloud platform data center any solution video for handling the event involved in the event image information, it sends a manual solution application to the manual compensation unit;
S5, the manual compensation unit receives the manual solution application from the cloud platform smart center and carries out manual solution; the solution video is recorded throughout the manual solution process, and after the event is solved the solution video is uploaded to the cloud platform data center for storage.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011269104.4A CN112270296A (en) 2020-11-13 2020-11-13 Cloud platform based smart city visual management system and method

Publications (1)

Publication Number Publication Date
CN112270296A true CN112270296A (en) 2021-01-26

Family

ID=74340641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011269104.4A Withdrawn CN112270296A (en) 2020-11-13 2020-11-13 Cloud platform based smart city visual management system and method

Country Status (1)

Country Link
CN (1) CN112270296A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537691A (en) * 2021-05-09 2021-10-22 武汉兴得科技有限公司 Big data public health event emergency command method and system


Similar Documents

Publication Title
US10255686B2 (en) Estimating depth from a single image
CN111696112B (en) Automatic image cutting method and system, electronic equipment and storage medium
CN111768008A (en) Federal learning method, device, equipment and storage medium
CN110830807B (en) Image compression method, device and storage medium
JP2022553252A (en) IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, SERVER, AND COMPUTER PROGRAM
CN111291170B (en) Session recommendation method and related device based on intelligent customer service
CN110599393B (en) Picture style conversion method, device, equipment and computer readable storage medium
CN115064020B (en) Intelligent teaching method, system and storage medium based on digital twin technology
CN112364203B (en) Television video recommendation method, device, server and storage medium
CN112232889A (en) User interest portrait extension method, device, equipment and storage medium
EP4047562A1 (en) Method and apparatus for training a font generation model
CN112149642A (en) Text image recognition method and device
CN109523558A (en) A kind of portrait dividing method and system
CN111444341A (en) User portrait construction method, device and equipment and readable storage medium
CN115423353A (en) Power distribution network resource consumption scheduling method and device, electronic equipment and storage medium
CN114255187A (en) Multi-level and multi-level image optimization method and system based on big data platform
CN112084959A (en) Crowd image processing method and device
CN112270296A (en) Cloud platform based smart city visual management system and method
CN113822114A (en) Image processing method, related equipment and computer readable storage medium
CN111597361B (en) Multimedia data processing method, device, storage medium and equipment
CN111539353A (en) Image scene recognition method and device, computer equipment and storage medium
CN109617960A (en) A kind of web AR data presentation method based on attributed separation
CN111553961B (en) Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device
CN112486667B (en) Method and device for accurately processing data based on edge calculation
CN114584471A (en) Model training method and device for network data analysis function based on federal learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210126)