CN114140714A - Data processing method and system for smart city


Info

Publication number
CN114140714A
Authority
CN
China
Prior art keywords
video
target
request
processed
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111348156.5A
Other languages
Chinese (zh)
Inventor
赵茜茜
许评
杨万广
张承彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111348156.5A priority Critical patent/CN114140714A/en
Publication of CN114140714A publication Critical patent/CN114140714A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services

Abstract

The invention provides a data processing method and system for a smart city, and relates to the technical field of data processing. In the invention, a to-be-processed request video sent by a target user terminal device is acquired, wherein the to-be-processed request video comprises multiple frames of user request video frames, which are continuous video frames obtained by image acquisition of a target area; video feature analysis processing is performed on the to-be-processed request video to obtain target video frame screening feature information corresponding to the to-be-processed request video; and video frame screening processing is performed, based on the target video frame screening feature information, on the multiple frames of user request video frames included in the to-be-processed request video to obtain at least one corresponding frame of target user request video frame, from which the corresponding target request video is constructed. On this basis, the problem of low reliability of video frame screening in the prior art can be alleviated.

Description

Data processing method and system for smart city
Technical Field
The invention relates to the technical field of data processing, in particular to a data processing method and system for a smart city.
Background
In the construction and application of smart cities, positioning is an important requirement, but for some areas with complex environments the prior art may suffer from low positioning accuracy. To address this, one prior-art solution has a user send an image of the current location to a background server for identification, so as to determine the current position. Before performing the identification, the background server generally screens the obtained images (video) in order to reduce the amount of data processed during identification. In the prior art, however, this screening is generally a de-duplication performed directly on the similarity or difference between two adjacent frames, so the reliability of the image screening is likely to be low.
Disclosure of Invention
In view of the above, the present invention provides a data processing method and system for a smart city to address the problem of low reliability of video frame screening in the prior art.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a data processing method of a smart city is applied to a city monitoring server, and comprises the following steps:
acquiring a to-be-processed request video sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device, wherein the to-be-processed request video comprises a plurality of frames of user request video frames, and the plurality of frames of user request video frames are a plurality of frames of continuous video frames obtained on the basis of image acquisition on a target area;
performing video feature analysis processing on the request video to be processed to obtain target video frame screening feature information corresponding to the request video to be processed;
and performing video frame screening processing on the multiple frames of user request video frames included in the to-be-processed request video based on the target video frame screening characteristic information to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
In some preferred embodiments, in the data processing method for a smart city, the step of acquiring a to-be-processed request video sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device includes:
judging whether to-be-processed request information sent by a target user terminal device in communication connection is acquired, and, when the to-be-processed request information sent by the target user terminal device is received, performing verification processing on the to-be-processed request information to obtain a corresponding verification processing result;
if the verification processing result is a verification processing failure, generating corresponding request rejection notification information, and sending the request rejection notification information to the target user terminal device, wherein the target user terminal device is used for displaying the request rejection notification information to a target user corresponding to the target user terminal device, so that the target user stops performing the target request operation;
if the verification processing result is a verification processing success, generating corresponding request success notification information, and sending the request success notification information to the target user terminal device, wherein the target user terminal device is used for displaying the request success notification information to the target user corresponding to the target user terminal device, so that the target user performs the target request operation;
and acquiring a to-be-processed request video sent by the target user terminal device in response to the target request operation performed by the target user, wherein the to-be-processed request video is obtained by performing image acquisition on the target area based on the target user terminal device in response to the target request operation, or the to-be-processed request video is a video with image information of the target area, which is obtained by selecting a stored video by the target user terminal device in response to the target request operation.
In some preferred embodiments, in the data processing method for a smart city, the step of determining whether to acquire to-be-processed request information sent by a target user terminal device in communication connection, and when receiving the to-be-processed request information sent by the target user terminal device, performing verification processing on the to-be-processed request information to obtain a corresponding verification processing result includes:
judging whether to acquire to-be-processed request information sent by target user terminal equipment in communication connection, and when receiving the to-be-processed request information sent by the target user terminal equipment, analyzing the to-be-processed request information to obtain target identity information carried in the to-be-processed request information, wherein the target identity information is used for representing the identity of the target user terminal equipment or representing the identity of a target user corresponding to the target user terminal equipment;
searching in a pre-constructed target identity database to determine whether the target identity database stores the target identity information;
if the target identity information is stored in the target identity database, determining that the verification processing of the to-be-processed request information succeeds and generating a verification processing result of verification processing success; and if the target identity information is not stored in the target identity database, determining that the verification processing of the to-be-processed request information fails and generating a verification processing result of verification processing failure.
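As a minimal illustration, the lookup-based verification described above could be sketched as follows; the function and database names (`verify_request`, `IDENTITY_DB`) and the request format are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of the verification step: target identity information is
# parsed out of the request and looked up in a pre-constructed identity
# database; presence means verification success, absence means failure.

IDENTITY_DB = {"device-001", "device-002"}  # stand-in for the target identity database

def verify_request(request: dict) -> bool:
    """Return True on verification success, False on failure."""
    identity = request.get("identity")  # identity of the device or of its user
    return identity in IDENTITY_DB
```

A True result would trigger the request-success notification; a False result, the request-rejection notification.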
In some preferred embodiments, in the data processing method for a smart city, the step of performing video feature analysis processing on the to-be-processed request video to obtain target video frame screening feature information corresponding to the to-be-processed request video includes:
determining the number of the multi-frame user request video frames included in the request video to be processed to obtain the number of target video frames corresponding to the request video to be processed;
and obtaining target video frame screening characteristic information corresponding to the to-be-processed request video based on the number of the target video frames.
In some preferred embodiments, in the data processing method for a smart city, the step of obtaining target video frame screening feature information corresponding to the to-be-processed request video based on the number of target video frames includes:
for each frame of user request video frame in the multiple frames of user request video frames included in the request video to be processed, performing first object identification processing on the user request video frame to obtain the number of first objects in the user request video frame, and performing object statistics based on the number of first objects in each frame of user request video frame in the multiple frames of user request video frames to obtain the number of first objects corresponding to the request video to be processed, wherein the first objects are static objects;
for each frame of user request video frame in the multiple frames of user request video frames included in the request video to be processed, performing second object identification processing on the user request video frame to obtain the number of second objects in the user request video frame, and performing object statistics based on the number of second objects in each frame of user request video frame in the multiple frames of user request video frames to obtain the number of second objects corresponding to the request video to be processed, wherein the second objects are dynamic objects;
for each frame of user request video frame, counting the sum of the number of first objects and the number of second objects in the user request video frame to obtain an object statistical number corresponding to the user request video frame, and performing mean value calculation based on the object statistical number corresponding to each frame of user request video frame in the multiple frames of user request video frames to obtain an object number mean value corresponding to the to-be-processed request video;
and performing multi-dimensional feature fusion calculation processing based on the number of target video frames, the number of first objects, the number of second objects and the object number mean value to obtain the target video frame screening feature information corresponding to the to-be-processed request video.
In some preferred embodiments, in the data processing method for a smart city, the step of performing multi-dimensional feature fusion calculation processing based on the number of target video frames, the number of first objects, the number of second objects, and the object number mean value to obtain the target video frame screening feature information corresponding to the to-be-processed request video includes:
determining a first video frame screening characteristic coefficient corresponding to the to-be-processed request video based on the number of the target video frames, wherein the number of the target video frames and the first video frame screening characteristic coefficient have a positive correlation mutual corresponding relation;
determining a second video frame screening feature coefficient corresponding to the to-be-processed request video based on the first object number, determining a third video frame screening feature coefficient corresponding to the to-be-processed request video based on the second object number, and determining a fourth video frame screening feature coefficient corresponding to the to-be-processed request video based on the object number average value, wherein the first object number and the second video frame screening feature coefficient have a negative correlation mutual correspondence, the second object number and the third video frame screening feature coefficient have a negative correlation mutual correspondence, and the object number average value and the fourth video frame screening feature coefficient have a negative correlation mutual correspondence;
and performing weighted summation calculation on the first video frame screening feature coefficient, the second video frame screening feature coefficient, the third video frame screening feature coefficient and the fourth video frame screening feature coefficient to obtain target video frame screening feature information corresponding to the to-be-processed request video, wherein the weighting coefficient corresponding to the second video frame screening feature coefficient is greater than the weighting coefficient corresponding to the fourth video frame screening feature coefficient, the weighting coefficient corresponding to the fourth video frame screening feature coefficient is greater than the weighting coefficient corresponding to the third video frame screening feature coefficient, and the weighting coefficient corresponding to the third video frame screening feature coefficient is greater than the weighting coefficient corresponding to the first video frame screening feature coefficient.
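A sketch of this fusion step is given below. The patent only fixes the correlation directions and the weight ordering, so the concrete mapping functions and weight values here are illustrative assumptions.

```python
# Illustrative multi-dimensional feature fusion. Each coefficient maps one
# statistic into (0, 1) with the stated correlation direction: c1 rises with
# the frame count; c2, c3 and c4 fall as the static-object count, the
# dynamic-object count and the object-number mean grow. The weights satisfy
# the required ordering w2 > w4 > w3 > w1; their exact values are assumptions.

def fuse_features(frame_count, static_count, dynamic_count, mean_count):
    c1 = frame_count / (frame_count + 1)   # positive correlation
    c2 = 1.0 / (1.0 + static_count)        # negative correlation
    c3 = 1.0 / (1.0 + dynamic_count)       # negative correlation
    c4 = 1.0 / (1.0 + mean_count)          # negative correlation
    w1, w2, w3, w4 = 0.1, 0.4, 0.2, 0.3    # w2 > w4 > w3 > w1
    return w1 * c1 + w2 * c2 + w3 * c3 + w4 * c4
```

Any other monotone mappings and weights with the same ordering would fit the description equally well.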
In some preferred embodiments, in the data processing method for a smart city, the step of performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target video frame screening feature information to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining a corresponding target request video based on the at least one frame of target user request video frame includes:
determining a target screening proportionality coefficient with a positive correlation based on a characteristic value corresponding to the target video frame screening characteristic information, wherein the target screening proportionality coefficient is used for representing a maximum proportionality value of the screened video frames during video frame screening processing;
and performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target screening proportionality coefficient to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
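Interpreting the target screening proportion coefficient as the maximum fraction of frames that may be discarded, the screening step could be sketched as follows; the even-sampling strategy is an assumption, since the patent does not fix which frames are dropped.

```python
# Minimal screening sketch: keep at least (1 - ratio) of the frames, sampled
# evenly across the video, so that at most `ratio` of the frames are discarded.

def screen_frames(frames, screening_ratio):
    keep = max(1, round(len(frames) * (1.0 - screening_ratio)))
    step = len(frames) / keep
    return [frames[int(i * step)] for i in range(keep)]
```

The kept frames, in order, would then be concatenated to construct the target request video.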
The embodiment of the invention also provides a data processing system of the smart city, which is applied to the city monitoring server, and the data processing system of the smart city comprises:
the video acquisition module is used for acquiring a to-be-processed request video which is sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device, wherein the to-be-processed request video comprises a plurality of frames of user request video frames, and the plurality of frames of user request video frames are continuous video frames obtained based on image acquisition of a target area;
the video analysis module is used for carrying out video characteristic analysis processing on the to-be-processed request video to obtain target video frame screening characteristic information corresponding to the to-be-processed request video;
and the video screening module is used for carrying out video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video based on the target video frame screening characteristic information to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
In some preferred embodiments, in the data processing system of the smart city, the video analysis module is specifically configured to:
determining the number of the multi-frame user request video frames included in the request video to be processed to obtain the number of target video frames corresponding to the request video to be processed;
and obtaining target video frame screening characteristic information corresponding to the to-be-processed request video based on the number of the target video frames.
In some preferred embodiments, in the data processing system of the smart city, the video filtering module is specifically configured to:
determining a target screening proportionality coefficient with a positive correlation based on a characteristic value corresponding to the target video frame screening characteristic information, wherein the target screening proportionality coefficient is used for representing a maximum proportionality value of the screened video frames during video frame screening processing;
and performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target screening proportionality coefficient to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
According to the data processing method and system for the smart city, after the to-be-processed request video sent by the target user terminal device in response to the target request operation performed by the corresponding target user is obtained, video feature analysis processing can be performed on the to-be-processed request video to obtain corresponding target video frame screening feature information, then video frame screening processing is performed on the to-be-processed request video based on the target video frame screening feature information to obtain at least one corresponding frame of target user request video frame.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a city monitoring server according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps included in a data processing method for a smart city according to an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating modules included in a data processing system of a smart city according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a city monitoring server. Wherein the city monitoring server may include a memory and a processor.
In detail, the memory and the processor are electrically connected directly or indirectly to realize data transmission or interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The memory can have stored therein at least one software function (computer program) which can be present in the form of software or firmware. The processor may be configured to execute the executable computer program stored in the memory, so as to implement the data processing method for the smart city provided by the embodiment of the present invention.
It is understood that, as an alternative implementation, the memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It is understood that, as an alternative implementation manner, the structure shown in fig. 1 is only an illustration, and the city monitoring server may further include more or fewer components than those shown in fig. 1, or have a different configuration from that shown in fig. 1, for example, may include a communication unit for information interaction with other devices (e.g., user terminal devices such as mobile phones).
With reference to fig. 2, an embodiment of the present invention further provides a data processing method for a smart city, which can be applied to the city monitoring server. The method steps defined by the flow related to the data processing method of the smart city can be realized by the city monitoring server.
The specific process shown in FIG. 2 will be described in detail below.
Step S110, obtaining a to-be-processed request video sent by a target user terminal device in communication connection responding to a target request operation performed by a target user corresponding to the target user terminal device.
In this embodiment of the present invention, when executing step S110, the city monitoring server may obtain a to-be-processed request video sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device. The to-be-processed request video comprises a plurality of frames of user request video frames, and the plurality of frames of user request video frames are continuous video frames obtained based on image acquisition of a target area.
Step S120, performing video feature analysis processing on the to-be-processed request video to obtain target video frame screening feature information corresponding to the to-be-processed request video.
In this embodiment of the present invention, when the city monitoring server executes the step S120, the city monitoring server may perform video feature analysis processing on the to-be-processed request video to obtain target video frame screening feature information corresponding to the to-be-processed request video.
Step S130, performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target video frame screening feature information to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining a corresponding target request video based on the at least one frame of target user request video frame.
In this embodiment of the present invention, when the city monitoring server executes the step S130, the city monitoring server may perform video frame screening processing on the multiple frames of user request video frames included in the to-be-processed request video based on the target video frame screening feature information, obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and construct and obtain a corresponding target request video based on the at least one frame of target user request video frame.
Based on the steps included in the data processing method, after a to-be-processed request video sent by a target user terminal device in communication connection in response to a target request operation performed by a corresponding target user is acquired, video feature analysis processing may be performed on the to-be-processed request video to obtain corresponding target video frame screening feature information, and then video frame screening processing may be performed on the to-be-processed request video based on the target video frame screening feature information to obtain at least one corresponding frame of target user request video frame.
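Schematically, steps S110 to S130 form the pipeline sketched below. The feature analysis and screening rules here are toy stand-ins (a frame count compared against a threshold), not the patent's actual feature computation.

```python
# Toy end-to-end sketch of S110-S130: receive the request video frames, derive
# a screening feature, screen the frames by it, and rebuild the target request
# video. The threshold and the every-other-frame rule are illustrative only.

def process_request_video(frames, threshold=8):
    feature_value = len(frames)          # S120: stand-in feature analysis
    if feature_value > threshold:        # S130: screen using the feature
        target_frames = frames[::2]      # drop every other frame
    else:
        target_frames = list(frames)     # keep all frames
    return target_frames                 # the constructed target request video
```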
It is to be understood that, as an alternative implementation manner, the step S110 may further include the following steps to obtain the pending request video:
firstly, judging whether to-be-processed request information sent by a target user terminal device in communication connection is acquired, and, when the to-be-processed request information sent by the target user terminal device is received, performing verification processing on the to-be-processed request information to obtain a corresponding verification processing result;
secondly, if the verification processing result is that the verification processing fails, generating corresponding request rejection notification information, and sending the request rejection notification information to the target user terminal device, wherein the target user terminal device is used for displaying the request rejection notification information to a target user corresponding to the target user terminal device so as to enable the target user to stop performing target request operation;
then, if the verification processing result is a verification processing success, generating corresponding request success notification information, and sending the request success notification information to the target user terminal device, wherein the target user terminal device is used for displaying the request success notification information to the target user corresponding to the target user terminal device, so that the target user performs the target request operation;
and finally, acquiring a to-be-processed request video sent by the target user terminal device in response to the target request operation performed by the target user, wherein the to-be-processed request video is obtained by performing image acquisition on the target area based on the target user terminal device in response to the target request operation, or the to-be-processed request video is a video with image information of the target area, which is obtained by selecting a stored video by the target user terminal device in response to the target request operation.
It can be understood that, as an alternative implementation manner, the step of determining whether to acquire to-be-processed request information sent by a target user terminal device in communication connection, and when receiving the to-be-processed request information sent by the target user terminal device, performing verification processing on the to-be-processed request information to obtain a corresponding verification processing result may include the following steps:
firstly, judging whether to acquire to-be-processed request information sent by target user terminal equipment in communication connection, and when receiving the to-be-processed request information sent by the target user terminal equipment, analyzing the to-be-processed request information to obtain target identity information carried in the to-be-processed request information, wherein the target identity information is used for representing the identity of the target user terminal equipment or representing the identity of a target user corresponding to the target user terminal equipment;
secondly, searching in a target identity database which is constructed in advance to determine whether the target identity database stores the target identity information;
then, if the target identity information is stored in the target identity database, determining that the verification processing of the to-be-processed request information succeeds and generating a verification processing result of verification processing success; and if the target identity information is not stored in the target identity database, determining that the verification processing of the to-be-processed request information fails and generating a verification processing result of verification processing failure.
It is to be understood that, as an alternative implementation manner, the step S120 may further include the following steps to obtain the target video frame screening feature information:
firstly, determining the number of the multi-frame user request video frames included in the request video to be processed to obtain the number of target video frames corresponding to the request video to be processed;
and secondly, obtaining target video frame screening characteristic information corresponding to the to-be-processed request video based on the number of the target video frames.
It is to be understood that, as an alternative implementation manner, the step of obtaining the target video frame screening feature information corresponding to the to-be-processed request video based on the number of target video frames may include the following steps:
firstly, aiming at each frame of user request video frames in the multiple frames of user request video frames included in the request video to be processed, performing first object identification processing on the user request video frames to obtain the number of first objects in the user request video frames, and performing object statistics on the number of the first objects in each frame of user request video frames in the multiple frames of user request video frames to obtain the number of the first objects corresponding to the request video to be processed, wherein the first objects are static objects (such as various buildings, plants and the like);
secondly, aiming at each frame of user request video frames in the multiple frames of user request video frames included in the request video to be processed, carrying out second object identification processing on the user request video frames to obtain the number of second objects in the user request video frames, and carrying out object statistics on the number of the second objects in each frame of user request video frames in the multiple frames of user request video frames to obtain the number of the second objects corresponding to the request video to be processed, wherein the second objects are dynamic objects (such as people, vehicles and the like);
then, for each frame of user request video frames in the multiple frames of user request video frames, counting the sum of the number of first objects and the number of second objects in the user request video frame to obtain the object statistical number corresponding to the user request video frame, and performing mean value calculation based on the object statistical number corresponding to each frame of user request video frames in the multiple frames of user request video frames to obtain the object number mean value corresponding to the request video to be processed;
and finally, performing multi-dimensional feature fusion calculation processing based on the number of the target video frames, the number of the first objects, the number of the second objects and the mean value of the number of the objects to obtain the screening feature information of the target video frames corresponding to the to-be-processed request video.
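The statistics gathered in the steps above can be sketched as follows. The object "identification" itself is stubbed out with precomputed per-frame counts (in practice a detector would supply them); the function name and argument layout are assumptions for illustration.

```python
# Illustrative sketch of the per-frame object statistics: given per-frame
# counts of static (first) and dynamic (second) objects, compute the
# totals and the per-frame object-number mean used for feature fusion.

def frame_statistics(static_counts, dynamic_counts):
    """static_counts / dynamic_counts: one count per video frame.
    Returns (number of frames, total first objects, total second
    objects, mean objects per frame)."""
    assert len(static_counts) == len(dynamic_counts)
    n_frames = len(static_counts)
    n_first = sum(static_counts)     # static objects (buildings, plants, ...)
    n_second = sum(dynamic_counts)   # dynamic objects (people, vehicles, ...)
    per_frame_totals = [s + d for s, d in zip(static_counts, dynamic_counts)]
    mean_objects = sum(per_frame_totals) / n_frames
    return n_frames, n_first, n_second, mean_objects

print(frame_statistics([2, 3, 1], [4, 0, 2]))  # (3, 6, 6, 4.0)
```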
It is to be understood that, as an alternative implementation manner, the step of performing multidimensional feature fusion calculation processing based on the number of target video frames, the number of first objects, the number of second objects, and the average value of the number of objects to obtain the target video frame screening feature information corresponding to the to-be-processed request video may include the following steps:
firstly, determining a first video frame screening characteristic coefficient corresponding to the request video to be processed based on the number of the target video frames, wherein the number of the target video frames and the first video frame screening characteristic coefficient have a positive correlation mutual corresponding relation;
secondly, determining a second video frame screening feature coefficient corresponding to the to-be-processed request video based on the first object number, determining a third video frame screening feature coefficient corresponding to the to-be-processed request video based on the second object number, and determining a fourth video frame screening feature coefficient corresponding to the to-be-processed request video based on the object number average value, wherein the first object number and the second video frame screening feature coefficient have a negative correlation mutual correspondence, the second object number and the third video frame screening feature coefficient have a negative correlation mutual correspondence, and the object number average value and the fourth video frame screening feature coefficient have a negative correlation mutual correspondence;
then, performing weighted summation calculation on the first video frame screening feature coefficient, the second video frame screening feature coefficient, the third video frame screening feature coefficient and the fourth video frame screening feature coefficient to obtain target video frame screening feature information corresponding to the to-be-processed request video, wherein the weighting coefficient corresponding to the second video frame screening feature coefficient is greater than the weighting coefficient corresponding to the fourth video frame screening feature coefficient, the weighting coefficient corresponding to the fourth video frame screening feature coefficient is greater than the weighting coefficient corresponding to the third video frame screening feature coefficient, and the weighting coefficient corresponding to the third video frame screening feature coefficient is greater than the weighting coefficient corresponding to the first video frame screening feature coefficient (the sum of the four weighting coefficients is 1).
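The weighted summation above can be sketched as follows. The concrete weight values and the coefficient inputs are assumptions; the patent only fixes the ordering w2 > w4 > w3 > w1 and that the four weights sum to 1.

```python
# Minimal sketch of the weighted fusion of the four video frame
# screening feature coefficients. Weight values are assumed; they
# satisfy the required ordering w2 > w4 > w3 > w1 and sum to 1.

W1, W2, W3, W4 = 0.10, 0.40, 0.20, 0.30
assert abs(W1 + W2 + W3 + W4 - 1.0) < 1e-9 and W2 > W4 > W3 > W1

def fuse(c1: float, c2: float, c3: float, c4: float) -> float:
    """Weighted summation of the four screening feature coefficients
    into a single target video frame screening feature value."""
    return W1 * c1 + W2 * c2 + W3 * c3 + W4 * c4

print(fuse(1.0, 0.5, 0.5, 0.5))  # 0.55
```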
It is understood that, as an alternative implementation manner, the step S130 may further include the following steps to obtain the target requested video:
firstly, determining a target screening proportionality coefficient with positive correlation based on a characteristic value corresponding to the target video frame screening characteristic information, wherein the target screening proportionality coefficient is used for representing the maximum proportionality value of the screened video frames during video frame screening processing;
then, based on the target screening proportionality coefficient, performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining a corresponding target request video based on the at least one frame of target user request video frame.
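The mapping from the feature value to the screening proportion can be sketched as below. The linear form and the constant `k` are assumptions; the patent only requires a positive correlation and that the coefficient bound the maximum proportion of screened-out frames.

```python
# Hedged sketch: derive the target screening proportionality coefficient
# from the feature value (positive correlation, here a clamped linear
# map) and use it to cap how many frames may be screened out.

def max_frames_to_drop(feature_value: float, n_frames: int,
                       k: float = 0.5) -> int:
    """Target screening scale coefficient = k * feature_value, clamped
    to [0, 1]; it bounds the proportion of frames removed."""
    ratio = min(max(k * feature_value, 0.0), 1.0)
    return int(n_frames * ratio)

print(max_frames_to_drop(0.8, 100))  # 40
```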
It can be understood that, as an alternative implementation manner, the step of performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target screening scaling factor to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining a corresponding target request video based on the at least one frame of target user request video frame may include the following steps:
firstly, acquiring a preset target video frame set, wherein the target video frame set comprises at least one reference video frame with a static object, each reference video frame is provided with one static object, and when the target video frame set comprises a plurality of reference video frames, the static objects of any two reference video frames are different;
secondly, for each user request video frame in the multiple frames of user request video frames included in the request video to be processed, determining a distribution density value of a static object in the user request video frame, and determining a target density interval corresponding to the distribution density value in a plurality of pre-configured density intervals, wherein the distribution density value is used for representing the distribution density of the static object in the corresponding user request video frame (for example, dividing the number of static objects by the area of a scene);
then, for each frame of user request video frames in the multiple frames of user request video frames included in the request video to be processed, if the target density interval corresponding to the user request video frame belongs to a first density interval, determining a first video frame similarity threshold value configured in advance for the first density interval as a target video frame similarity threshold value corresponding to the user request video frame;
then, for each frame of user request video frames in the multiple frames of user request video frames included in the request video to be processed, if the target density interval corresponding to the user request video frame belongs to a second density interval, determining a second video frame similarity threshold configured in advance for the second density interval as a target video frame similarity threshold corresponding to the user request video frame, where a density value corresponding to the second density interval is greater than a density value corresponding to the first density interval, and the second video frame similarity threshold is less than the first video frame similarity threshold;
further, for each frame of user request video frames in the multiple frames of user request video frames included in the request video to be processed, calculating the video frame similarity between the user request video frame and each frame of reference video frame in the target video frame set, determining the maximum video frame similarity corresponding to the user request video frame, and determining the relative size relationship between the maximum video frame similarity corresponding to the user request video frame and the corresponding target video frame similarity threshold;
further, for each frame of user request video frames in the multiple frames of user request video frames included in the request video to be processed, if the maximum video frame similarity corresponding to the user request video frame is greater than or equal to the corresponding target video frame similarity threshold, determining the user request video frame as a target user request video frame, and if the maximum video frame similarity corresponding to the user request video frame is less than the corresponding target video frame similarity threshold, determining the user request video frame as a user request video frame to be screened;
and finally, performing duplicate removal screening on the determined user request video frames to be screened based on the target screening proportion coefficient to obtain at least one frame of target user request video frame, and constructing and obtaining a corresponding target request video based on the obtained target user request video frame.
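The density-dependent thresholding above can be sketched as follows. The interval boundary, threshold values, and precomputed similarities are all assumed for illustration; the patent fixes only that the denser (second) interval receives the smaller similarity threshold.

```python
# Illustrative sketch: pick a per-frame similarity threshold from the
# static-object distribution density interval (the denser second
# interval gets the lower threshold), then keep frames whose best match
# against the reference video frame set reaches that threshold.

T_FIRST_INTERVAL, T_SECOND_INTERVAL = 0.8, 0.6  # second threshold < first
DENSITY_SPLIT = 0.5  # assumed boundary between the two density intervals

def threshold_for(density: float) -> float:
    """Lower-density frames use the first (larger) similarity threshold."""
    return T_FIRST_INTERVAL if density < DENSITY_SPLIT else T_SECOND_INTERVAL

def classify(frames):
    """frames: list of (density, max_similarity_to_reference_set).
    Returns (indices kept as target frames, indices passed on to
    duplicate-removal screening)."""
    kept, to_screen = [], []
    for i, (density, max_sim) in enumerate(frames):
        if max_sim >= threshold_for(density):
            kept.append(i)
        else:
            to_screen.append(i)
    return kept, to_screen

print(classify([(0.2, 0.9), (0.7, 0.5), (0.9, 0.65)]))  # ([0, 2], [1])
```

Frames routed to `to_screen` would then undergo the duplicate-removal screening capped by the target screening proportionality coefficient.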
With reference to fig. 3, an embodiment of the present invention further provides a data processing system for a smart city, which can be applied to the city monitoring server. The data processing system of the smart city may include:
the video acquisition module is used for acquiring a to-be-processed request video which is sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device, wherein the to-be-processed request video comprises a plurality of frames of user request video frames, and the plurality of frames of user request video frames are continuous video frames obtained based on image acquisition of a target area;
the video analysis module is used for carrying out video characteristic analysis processing on the to-be-processed request video to obtain target video frame screening characteristic information corresponding to the to-be-processed request video;
and the video screening module is used for carrying out video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video based on the target video frame screening characteristic information to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
It is to be understood that, as an alternative implementation, the video parsing module is specifically configured to: determine the number of the multi-frame user request video frames included in the request video to be processed to obtain the number of target video frames corresponding to the request video to be processed; and obtain target video frame screening characteristic information corresponding to the to-be-processed request video based on the number of the target video frames.
It is to be understood that, as an alternative implementation, the video screening module is specifically configured to: determining a target screening proportionality coefficient with a positive correlation based on a characteristic value corresponding to the target video frame screening characteristic information, wherein the target screening proportionality coefficient is used for representing a maximum proportionality value of the screened video frames during video frame screening processing; and performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target screening proportionality coefficient to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
In summary, according to the data processing method and system for the smart city provided by the present invention, after a to-be-processed request video sent by a target user terminal device in communication connection in response to a target request operation performed by a corresponding target user is obtained, video feature analysis processing may be performed on the to-be-processed request video to obtain corresponding target video frame screening feature information, and then video frame screening processing may be performed on the to-be-processed request video based on the target video frame screening feature information to obtain at least one corresponding frame of target user request video frame.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A data processing method of a smart city is applied to a city monitoring server and comprises the following steps:
acquiring a to-be-processed request video sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device, wherein the to-be-processed request video comprises a plurality of frames of user request video frames, and the plurality of frames of user request video frames are a plurality of frames of continuous video frames obtained on the basis of image acquisition on a target area;
performing video feature analysis processing on the request video to be processed to obtain target video frame screening feature information corresponding to the request video to be processed;
and performing video frame screening processing on the multiple frames of user request video frames included in the to-be-processed request video based on the target video frame screening characteristic information to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
2. The method for processing data of a smart city according to claim 1, wherein the step of acquiring the video requested to be processed, which is sent by the target user terminal device in response to the target request operation performed by the target user corresponding to the target user terminal device, comprises:
judging whether to acquire to-be-processed request information sent by target user terminal equipment in communication connection, and when receiving the to-be-processed request information sent by the target user terminal equipment, checking the to-be-processed request information to obtain a corresponding checking result;
if the verification processing result is verification processing failure, generating corresponding request rejection notification information, and sending the request rejection notification information to the target user terminal device, wherein the target user terminal device is used for displaying the request rejection notification information to a target user corresponding to the target user terminal device so as to enable the target user to stop performing target request operation;
if the verification processing result is that the verification processing is successful, generating corresponding request success notification information, and sending the request success notification information to the target user terminal device, wherein the target user terminal device is used for displaying the request success notification information to a target user corresponding to the target user terminal device so as to enable the target user to perform the target request operation;
and acquiring a to-be-processed request video sent by the target user terminal device in response to the target request operation performed by the target user, wherein the to-be-processed request video is obtained by performing image acquisition on the target area based on the target user terminal device in response to the target request operation, or the to-be-processed request video is a video with image information of the target area, which is obtained by selecting a stored video by the target user terminal device in response to the target request operation.
3. The method according to claim 2, wherein the step of determining whether to obtain the request information to be processed sent by the target user terminal device in communication connection, and when receiving the request information to be processed sent by the target user terminal device, performing verification processing on the request information to be processed to obtain a corresponding verification processing result comprises:
judging whether to acquire to-be-processed request information sent by target user terminal equipment in communication connection, and when receiving the to-be-processed request information sent by the target user terminal equipment, analyzing the to-be-processed request information to obtain target identity information carried in the to-be-processed request information, wherein the target identity information is used for representing the identity of the target user terminal equipment or representing the identity of a target user corresponding to the target user terminal equipment;
searching in a pre-constructed target identity database to determine whether the target identity database stores the target identity information;
if the target identity database stores the target identity information, it is determined that the to-be-processed request information is successfully verified and a verification processing result is generated as a successful verification processing, and if the target identity database does not store the target identity information, it is determined that the to-be-processed request information is failed in verification processing and a verification processing result is generated as a failed verification processing.
4. The method according to claim 1, wherein the step of performing video feature analysis on the requested video to be processed to obtain the filtered feature information of the target video frame corresponding to the requested video to be processed includes:
determining the number of the multi-frame user request video frames included in the request video to be processed to obtain the number of target video frames corresponding to the request video to be processed;
and obtaining target video frame screening characteristic information corresponding to the to-be-processed request video based on the number of the target video frames.
5. The method as claimed in claim 4, wherein the step of obtaining the target video frame screening feature information corresponding to the requested video to be processed based on the number of target video frames comprises:
for each frame of user request video frame in the multiple frames of user request video frames included in the request video to be processed, performing first object identification processing on the user request video frame to obtain the number of first objects in the user request video frame, and performing object statistics based on the number of first objects in each frame of user request video frame in the multiple frames of user request video frames to obtain the number of first objects corresponding to the request video to be processed, wherein the first objects are static objects;
for each frame of user request video frame in the multiple frames of user request video frames included in the request video to be processed, performing second object identification processing on the user request video frame to obtain the number of second objects in the user request video frame, and performing object statistics based on the number of second objects in each frame of user request video frame in the multiple frames of user request video frames to obtain the number of second objects corresponding to the request video to be processed, wherein the second objects are dynamic objects;
counting the sum of the number of first objects and the number of second objects in each frame of user request video frames to obtain the object statistical number corresponding to the user request video frame, and performing mean value calculation based on the object statistical number corresponding to each frame of user request video frames in the multiple frames of user request video frames to obtain the object number mean value corresponding to the request video to be processed;
and performing multi-dimensional feature fusion calculation processing based on the number of the target video frames, the first object number, the second object number and the average value of the object numbers to obtain the screening feature information of the target video frames corresponding to the to-be-processed request video.
6. The method according to claim 5, wherein the step of performing multidimensional feature fusion calculation processing based on the number of target video frames, the number of first objects, the number of second objects, and the mean value of the number of objects to obtain the filtered feature information of the target video frame corresponding to the request video to be processed includes:
determining a first video frame screening characteristic coefficient corresponding to the to-be-processed request video based on the number of the target video frames, wherein the number of the target video frames and the first video frame screening characteristic coefficient have a positive correlation mutual corresponding relation;
determining a second video frame screening feature coefficient corresponding to the to-be-processed request video based on the first object number, determining a third video frame screening feature coefficient corresponding to the to-be-processed request video based on the second object number, and determining a fourth video frame screening feature coefficient corresponding to the to-be-processed request video based on the object number average value, wherein the first object number and the second video frame screening feature coefficient have a negative correlation mutual correspondence, the second object number and the third video frame screening feature coefficient have a negative correlation mutual correspondence, and the object number average value and the fourth video frame screening feature coefficient have a negative correlation mutual correspondence;
and performing weighted summation calculation on the first video frame screening feature coefficient, the second video frame screening feature coefficient, the third video frame screening feature coefficient and the fourth video frame screening feature coefficient to obtain target video frame screening feature information corresponding to the to-be-processed request video, wherein the weighting coefficient corresponding to the second video frame screening feature coefficient is greater than the weighting coefficient corresponding to the fourth video frame screening feature coefficient, the weighting coefficient corresponding to the fourth video frame screening feature coefficient is greater than the weighting coefficient corresponding to the third video frame screening feature coefficient, and the weighting coefficient corresponding to the third video frame screening feature coefficient is greater than the weighting coefficient corresponding to the first video frame screening feature coefficient.
7. The method according to any one of claims 1 to 6, wherein the step of performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target video frame screening feature information to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing a corresponding target request video based on the at least one frame of target user request video frame comprises:
determining a target screening proportionality coefficient with a positive correlation based on a characteristic value corresponding to the target video frame screening characteristic information, wherein the target screening proportionality coefficient is used for representing a maximum proportionality value of the screened video frames during video frame screening processing;
and performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target screening proportionality coefficient to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
8. The utility model provides a data processing system in wisdom city, its characterized in that is applied to city monitoring server, data processing system in wisdom city includes:
the video acquisition module is used for acquiring a to-be-processed request video which is sent by a target user terminal device in communication connection in response to a target request operation performed by a target user corresponding to the target user terminal device, wherein the to-be-processed request video comprises a plurality of frames of user request video frames, and the plurality of frames of user request video frames are continuous video frames obtained based on image acquisition of a target area;
the video analysis module is used for carrying out video characteristic analysis processing on the to-be-processed request video to obtain target video frame screening characteristic information corresponding to the to-be-processed request video;
and the video screening module is used for carrying out video frame screening processing on the multi-frame user request video frames included in the to-be-processed request video based on the target video frame screening characteristic information to obtain at least one frame of target user request video frame corresponding to the to-be-processed request video, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
9. The data processing system of claim 8, wherein the video parsing module is specifically configured to:
determining the number of the multi-frame user request video frames included in the request video to be processed to obtain the number of target video frames corresponding to the request video to be processed;
and obtaining target video frame screening characteristic information corresponding to the to-be-processed request video based on the number of the target video frames.
10. The data processing system of a smart city of claim 8, wherein the video screening module is specifically configured to:
determining a target screening proportionality coefficient with a positive correlation based on a characteristic value corresponding to the target video frame screening characteristic information, wherein the target screening proportionality coefficient is used for representing a maximum proportionality value of the screened video frames during video frame screening processing;
and performing video frame screening processing on the multiple frames of user request video frames included in the request video to be processed based on the target screening proportionality coefficient to obtain at least one frame of target user request video frame corresponding to the request video to be processed, and constructing and obtaining the corresponding target request video based on the at least one frame of target user request video frame.
CN202111348156.5A 2021-11-15 2021-11-15 Data processing method and system for smart city Withdrawn CN114140714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111348156.5A CN114140714A (en) 2021-11-15 2021-11-15 Data processing method and system for smart city

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111348156.5A CN114140714A (en) 2021-11-15 2021-11-15 Data processing method and system for smart city

Publications (1)

Publication Number Publication Date
CN114140714A true CN114140714A (en) 2022-03-04

Family

ID=80394024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111348156.5A Withdrawn CN114140714A (en) 2021-11-15 2021-11-15 Data processing method and system for smart city

Country Status (1)

Country Link
CN (1) CN114140714A (en)

Similar Documents

Publication Publication Date Title
CN114140713A (en) Image recognition system and image recognition method
CN113949881B (en) Business processing method and system based on smart city data
CN114666473A (en) Video monitoring method, system, terminal and storage medium for farmland protection
CN114581856B (en) Agricultural unit motion state identification method and system based on Beidou system and cloud platform
CN113176978A (en) Monitoring method, system and device based on log file and readable storage medium
CN114140712A (en) Automatic image recognition and distribution system and method
CN112953738A (en) Root cause alarm positioning system, method and device and computer equipment
CN116737765A (en) Service alarm information processing method and device, electronic equipment and storage medium
CN114139016A (en) Data processing method and system for intelligent cell
CN114189535A (en) Service request method and system based on smart city data
CN115620243B (en) Pollution source monitoring method and system based on artificial intelligence and cloud platform
CN110909263B (en) Method and device for determining companion relationship of identity characteristics
CN115375886A (en) Data acquisition method and system based on cloud computing service
CN115065842B (en) Panoramic video streaming interaction method and system based on virtual reality
CN114140714A (en) Data processing method and system for smart city
CN114095734A (en) User data compression method and system based on data processing
CN111510940B (en) Signaling analysis method and device
CN114140711A (en) Monitoring data screening method for smart city
CN115082709B (en) Remote sensing big data processing method, system and cloud platform
CN114156495B (en) Laminated battery assembly processing method and system based on big data
CN114896653A (en) Building data monitoring method and system based on BIM
CN116561508B (en) Outlier detection method, system and medium for population data based on big data
CN114095391B (en) Data detection method, baseline model construction method and electronic equipment
CN114201676A (en) User recommendation method and system based on intelligent cell user matching
CN115442392A (en) Data processing platform and data acquisition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220304