CN114095734A - User data compression method and system based on data processing - Google Patents

User data compression method and system based on data processing

Info

Publication number
CN114095734A
Authority
CN
China
Prior art keywords
video
target
monitoring video
user
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111251895.2A
Other languages
Chinese (zh)
Inventor
陈正跃
夏志齐
谭子奕
宋琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202111251895.2A
Publication of CN114095734A
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • H04N19/426Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements using memory downsizing methods
    • H04N19/428Recompression, e.g. by spatial or temporal decimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The invention provides a user data compression method and system based on data processing, and relates to the technical field of data processing. In the invention, user monitoring videos respectively sent by a plurality of user monitoring terminal devices are screened to obtain target monitoring videos corresponding to the user monitoring videos, wherein the user monitoring terminal devices are respectively used for carrying out image acquisition on monitored environment areas; for each target monitoring video, carrying out video frame analysis processing on the target monitoring video to obtain a corresponding video compression degree representation value; and for each target monitoring video, performing data compression processing on the target monitoring video based on the video compression degree characterization value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video. Based on the method, the problem of poor compression effect of compression processing on the user monitoring video in the prior art can be solved.

Description

User data compression method and system based on data processing
Technical Field
The invention relates to the technical field of data processing, in particular to a user data compression method and system based on data processing.
Background
A monitoring system generally includes at least one user monitoring terminal device (for example, an image capture device such as a camera) deployed at the front end and a user monitoring server deployed at the back end. The front-end user monitoring terminal device monitors a user to form a monitoring video and then sends the monitoring video to the back-end user monitoring server. After receiving the monitoring video, the user monitoring server generally screens it in view of factors such as its data processing capacity, and the screened monitoring video may then need to be compressed to meet certain storage or transmission requirements. However, the inventors have found that, in the prior art, compression of a monitoring video is generally performed at a fixed compression degree, so the compression effect is poor.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a user data compression method and system based on data processing, so as to solve the prior-art problem that the compression effect of compressing a user monitoring video is poor.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a user data compression method based on data processing is applied to a user monitoring server, the user monitoring server is in communication connection with a plurality of user monitoring terminal devices, and the user data compression method based on data processing comprises the following steps:
screening the obtained user monitoring videos respectively sent by the user monitoring terminal devices to obtain target monitoring videos corresponding to the user monitoring videos, wherein the user monitoring terminal devices are respectively used for carrying out image acquisition on the monitored environment area to obtain a plurality of corresponding user monitoring videos, each user monitoring video comprises a plurality of frames of user monitoring video frames, and each target monitoring video comprises at least one frame of user monitoring video frame;
for each target monitoring video, carrying out video frame analysis processing on the target monitoring video to obtain a video compression degree representation value corresponding to the target monitoring video;
and for each target monitoring video, performing data compression processing on the target monitoring video based on the video compression degree characterization value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
In some preferred embodiments, in the data processing-based user data compression method, the step of performing video frame analysis processing on each target surveillance video to obtain a video compression degree representation value corresponding to the target surveillance video includes:
aiming at each target monitoring video, performing interframe pixel difference value calculation on each two adjacent frames of user monitoring video frames included in the target monitoring video to obtain a pixel difference value between each two adjacent frames of user monitoring video frames, and calculating a sum value of the pixel difference values between each two adjacent frames of user monitoring video frames to obtain a total pixel difference value corresponding to the target monitoring video;
and for each target monitoring video, determining a video compression degree representation value with a negative correlation relation based on the pixel total difference value corresponding to the target monitoring video.
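The two steps above can be sketched as follows. This is a minimal illustration in which frames are plain 2-D lists of pixel intensities; the reciprocal mapping used at the end is an assumption, since the description only requires a negative correlation between the total pixel difference and the characterization value (any monotonically decreasing mapping would do):

```python
def compression_value_from_motion(frames, scale=1.0):
    """frames: list of equally sized 2-D lists of pixel intensities.
    Sums the absolute pixel differences between each two adjacent
    frames (the total pixel difference value), then maps the total to
    a compression degree characterization value that decreases as the
    total grows, so high-motion videos are compressed less heavily."""
    total_diff = 0
    for prev, curr in zip(frames, frames[1:]):
        for row_p, row_c in zip(prev, curr):
            for p, c in zip(row_p, row_c):
                total_diff += abs(c - p)
    # Negative correlation: larger total difference -> smaller value.
    return scale / (1.0 + total_diff)
```

A static scene thus receives a larger characterization value (heavier compression) than a scene with large inter-frame changes.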
In some preferred embodiments, in the data processing-based user data compression method, the step of performing video frame analysis processing on each target surveillance video to obtain a video compression degree representation value corresponding to the target surveillance video includes:
for each target monitoring video, carrying out user object identification processing on each frame of user monitoring video frame included in the target monitoring video to obtain a user object of the target monitoring video;
and determining a video compression degree representation value with a negative correlation relation according to the number of user objects of each target monitoring video.
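The object-count variant follows the same pattern. The user-object detector itself is outside the scope of this fragment, and the reciprocal mapping is again an illustrative assumption standing in for any negatively correlated mapping:

```python
def compression_value_from_objects(n_user_objects, scale=1.0):
    """More user objects detected in the target monitoring video ->
    smaller compression degree characterization value, so videos that
    contain more users are compressed less aggressively."""
    return scale / (1.0 + n_user_objects)
```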
In some preferred embodiments, in the data processing-based user data compression method, the step of performing, for each target surveillance video, data compression processing on the target surveillance video based on the video compression degree characterization value corresponding to the target surveillance video to obtain target surveillance video compressed data corresponding to the target surveillance video includes:
respectively determining a compression degree adjustment coefficient corresponding to each target monitoring video;
for each target monitoring video, updating the video compression degree representation value corresponding to the target monitoring video based on the compression degree adjustment coefficient corresponding to the target monitoring video to obtain an updated video compression degree representation value corresponding to the target monitoring video;
and for each target monitoring video, performing data compression processing on the target monitoring video based on the updated video compression degree representation value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
In some preferred embodiments, in the data processing-based user data compression method, the step of respectively determining the compression degree adjustment coefficient corresponding to each target surveillance video includes:
for every two target surveillance videos among the obtained target surveillance videos, calculating a difference value between the video compression degree characterization values corresponding to the two target surveillance videos, and determining compression similarity information between the two target surveillance videos based on the difference value, wherein the difference value between the video compression degree characterization values and the corresponding compression similarity information have a negative correlation relationship;
clustering the target surveillance videos based on the obtained compression similarity information between every two target surveillance videos in the target surveillance videos to obtain at least one surveillance video cluster set corresponding to the target surveillance videos, wherein each surveillance video cluster set in the at least one surveillance video cluster set comprises at least one target surveillance video;
for each monitoring video cluster set in the at least one monitoring video cluster set, counting the number of target monitoring videos included in the monitoring video cluster set to obtain the video statistic number corresponding to the monitoring video cluster set;
for each monitoring video cluster set in the at least one monitoring video cluster set, calculating the average data volume of the target monitoring videos included in the monitoring video cluster set to obtain the video statistical data volume mean value corresponding to the monitoring video cluster set;
and aiming at each monitoring video cluster set in the at least one monitoring video cluster set, carrying out fusion processing on the video statistic number corresponding to the monitoring video cluster set and the corresponding video statistic data quantity mean value to obtain a compression degree adjusting coefficient corresponding to the monitoring video cluster set, wherein the video statistic number and the video statistic data quantity mean value have positive correlation with the compression degree adjusting coefficient.
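The coefficient-determination steps above (pairwise similarity, clustering, per-cluster statistics, weighted fusion) can be sketched as follows. The threshold-based single-link grouping and the concrete weight values are assumptions; the description only requires that similarity fall as the difference between characterization values grows, and that the fused coefficient rise with both the statistic number and the data-volume mean, with the first weight larger than the second:

```python
def adjustment_coefficients(videos, sim_threshold, w_count=0.7, w_mean=0.3):
    """videos: list of (compression_value, data_volume) per target video.
    Returns one compression degree adjustment coefficient per video,
    shared by all videos in the same cluster; w_count > w_mean mirrors
    the first/second weight-coefficient requirement."""
    n = len(videos)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Two videos share a cluster when the difference between their
    # compression values is within the threshold (high similarity).
    for i in range(n):
        for j in range(i + 1, n):
            if abs(videos[i][0] - videos[j][0]) <= sim_threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)

    coeffs = [0.0] * n
    for members in clusters.values():
        count = len(members)                                      # video statistic number
        mean_volume = sum(videos[i][1] for i in members) / count  # data-volume mean
        coeff = w_count * count + w_mean * mean_volume            # weighted fusion
        for i in members:
            coeffs[i] = coeff
    return coeffs
```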
In some preferred embodiments, in the data processing-based user data compression method, for each monitored video cluster set in the at least one monitored video cluster set, the step of performing fusion processing on the video statistics number corresponding to the monitored video cluster set and the corresponding video statistics data average value to obtain a compression degree adjustment coefficient corresponding to the monitored video cluster set includes:
determining a first weight coefficient and a second weight coefficient respectively corresponding to the video statistic number and the video statistic data average value, wherein the first weight coefficient is larger than the second weight coefficient;
and for each monitoring video cluster set in the at least one monitoring video cluster set, performing fusion processing on the video statistic number corresponding to the monitoring video cluster set and the corresponding video statistic data average value based on the first weight coefficient and the second weight coefficient to obtain a compression degree adjusting coefficient corresponding to the monitoring video cluster set.
In some preferred embodiments, in the data processing-based user data compression method, the step of, for each target surveillance video, updating the video compression degree representation value corresponding to the target surveillance video based on the compression degree adjustment coefficient corresponding to the target surveillance video to obtain an updated video compression degree representation value corresponding to the target surveillance video includes:
for each target monitoring video, calculating the product of the compression degree adjustment coefficient corresponding to the target monitoring video and the video compression degree representation value corresponding to the target monitoring video;
and for each target monitoring video, determining the product of the compression degree adjustment coefficient corresponding to the target monitoring video and the video compression degree representation value corresponding to the target monitoring video as the updated video compression degree representation value corresponding to the target monitoring video.
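The update itself is just an element-wise product, per target monitoring video:

```python
def update_compression_values(values, coeffs):
    """Updated characterization value = original characterization
    value x its cluster's adjustment coefficient."""
    return [v * c for v, c in zip(values, coeffs)]
```

For example, `update_compression_values([2.0, 3.0], [0.5, 2.0])` yields `[1.0, 6.0]`.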
The embodiment of the invention also provides a user data compression system based on data processing, which is applied to a user monitoring server, wherein the user monitoring server is in communication connection with a plurality of user monitoring terminal devices, and the user data compression system based on data processing comprises:
the target monitoring video screening processing module is used for screening the acquired user monitoring videos respectively sent by the user monitoring terminal devices to obtain target monitoring videos corresponding to the user monitoring videos, wherein the user monitoring terminal devices are respectively used for carrying out image acquisition on the monitored environment area to obtain a plurality of corresponding user monitoring videos, each user monitoring video comprises a plurality of frames of user monitoring video frames, and each target monitoring video comprises at least one frame of user monitoring video frame;
the target monitoring video analyzing and processing module is used for analyzing and processing video frames of each target monitoring video to obtain a video compression degree representation value corresponding to each target monitoring video;
and the target monitoring video compression processing module is used for carrying out data compression processing on each target monitoring video based on the video compression degree characterization value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
In some preferred embodiments, in the above user data compression system based on data processing, the target surveillance video parsing and processing module is specifically configured to:
aiming at each target monitoring video, performing interframe pixel difference value calculation on each two adjacent frames of user monitoring video frames included in the target monitoring video to obtain a pixel difference value between each two adjacent frames of user monitoring video frames, and calculating a sum value of the pixel difference values between each two adjacent frames of user monitoring video frames to obtain a total pixel difference value corresponding to the target monitoring video;
and for each target monitoring video, determining a video compression degree representation value with a negative correlation relation based on the pixel total difference value corresponding to the target monitoring video.
In some preferred embodiments, in the above user data compression system based on data processing, the target surveillance video compression processing module is specifically configured to:
respectively determining a compression degree adjustment coefficient corresponding to each target monitoring video;
for each target monitoring video, updating the video compression degree representation value corresponding to the target monitoring video based on the compression degree adjustment coefficient corresponding to the target monitoring video to obtain an updated video compression degree representation value corresponding to the target monitoring video;
and for each target monitoring video, performing data compression processing on the target monitoring video based on the updated video compression degree representation value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
In the user data compression method and system based on data processing provided by the embodiments of the present invention, the user monitoring videos respectively sent by the plurality of user monitoring terminal devices are first screened to obtain corresponding target monitoring videos. Video frame analysis processing is then performed on each target monitoring video to obtain a video compression degree characterization value corresponding to that target monitoring video, and data compression processing is performed on the target monitoring video based on this characterization value to obtain the corresponding target monitoring video compression data. Compared with the conventional technical scheme in which all monitoring videos are compressed at the same compression degree, the technical scheme provided by the embodiments of the present invention achieves a better match between the compression degree and each target monitoring video, guaranteeing the compression effect and thereby solving the prior-art problem of poor compression of user monitoring videos.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a user monitoring server according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps included in a data processing-based user data compression method according to an embodiment of the present invention.
Fig. 3 is a system block diagram of modules included in a data processing-based user data compression system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a user monitoring server. Wherein the user monitoring server may include a memory and a processor.
In detail, the memory and the processor are electrically connected directly or indirectly to realize data transmission or interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The memory can have stored therein at least one software function (computer program) which can be present in the form of software or firmware. The processor may be configured to execute the executable computer program stored in the memory, so as to implement the user data compression method based on data processing provided by the embodiment of the present invention.
For example, in some preferred embodiments, the Memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
For example, in some preferred embodiments, the Processor may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
For example, in some preferred embodiments, the structure shown in fig. 1 is merely illustrative, and the user monitoring server may further include more or fewer components than those shown in fig. 1, or have a different configuration than that shown in fig. 1, such as a communication unit for information interaction with other devices.
With reference to fig. 2, an embodiment of the present invention further provides a user data compression method based on data processing, applicable to the above user monitoring server. The steps of the method can be executed by the user monitoring server, which can be communicatively connected to a plurality of user monitoring terminal devices.
The specific process shown in FIG. 2 will be described in detail below.
And step S100, screening the acquired user monitoring videos respectively sent by the plurality of user monitoring terminal devices to obtain target monitoring videos corresponding to the user monitoring videos.
In the embodiment of the present invention, the user monitoring server may perform screening processing on the obtained user monitoring videos respectively sent by the plurality of user monitoring terminal devices, so as to obtain a target monitoring video corresponding to the user monitoring video. The plurality of user monitoring terminal devices are respectively used for carrying out image acquisition on the monitored environment area to obtain a plurality of corresponding user monitoring videos, each user monitoring video comprises a plurality of frames of user monitoring video frames, and each target monitoring video comprises at least one frame of user monitoring video frame.
And step S200, for each target monitoring video, performing video frame analysis processing on the target monitoring video to obtain a video compression degree representation value corresponding to the target monitoring video.
In the embodiment of the present invention, the user monitoring server may perform video frame analysis processing on each target monitoring video to obtain the video compression degree characterization value corresponding to that target monitoring video. The larger the video compression degree characterization value, the higher the compression ratio; the smaller the value, the lower the compression ratio, i.e., the smaller the difference in data volume between the compressed target monitoring video data and the original target monitoring video.
Step S300, for each target monitoring video, performing data compression processing on the target monitoring video based on the video compression degree characterization value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
In the embodiment of the present invention, the user monitoring server may, for each target monitoring video, perform data compression processing on the target monitoring video based on the video compression degree characterization value corresponding to the target monitoring video, so as to obtain target monitoring video compression data corresponding to the target monitoring video.
Based on the foregoing steps S100, S200, and S300, after the user monitoring videos respectively sent by the plurality of user monitoring terminal devices are screened to obtain corresponding target monitoring videos, video frame analysis processing may be performed on each target monitoring video to obtain its video compression degree characterization value, and data compression processing may then be performed on the target monitoring video based on that value to obtain the corresponding target monitoring video compression data. Compared with the conventional technical scheme in which all monitoring videos are compressed at the same compression degree, the technical scheme provided by the embodiments of the present invention achieves a better match between the compression degree and each target monitoring video, guaranteeing the compression effect and thereby solving the prior-art problem of poor compression of user monitoring videos.
For example, in some preferred embodiments, the step S100 in the above embodiments may include the following steps S110, S120 and S130, which are described in detail below.
Step S110, obtaining the user monitoring videos respectively sent by the plurality of user monitoring terminal devices, and obtaining a plurality of user monitoring videos corresponding to the plurality of user monitoring terminal devices.
In the embodiment of the present invention, the user monitoring server may obtain the user monitoring videos respectively sent by the plurality of user monitoring terminal devices, and obtain a plurality of user monitoring videos corresponding to the plurality of user monitoring terminal devices. The plurality of user monitoring terminal devices are respectively used for carrying out image acquisition on the monitored environment area to obtain a plurality of corresponding user monitoring videos, and each user monitoring video comprises a plurality of frames of user monitoring video frames.
Step S120, determining video feature information of each user surveillance video in the plurality of user surveillance videos.
In an embodiment of the present invention, the user monitoring server may determine video feature information of each of the plurality of user monitoring videos. The video feature information is used for representing the features of the corresponding user monitoring video.
Step S130, determining a video screening mode of each user monitoring video based on the video characteristic information of each user monitoring video, and screening the corresponding user monitoring video based on the video screening mode to obtain a target monitoring video corresponding to the user monitoring video.
In the embodiment of the present invention, the user monitoring server may determine a video screening manner of each user monitoring video based on the video feature information of each user monitoring video, and perform screening processing on the corresponding user monitoring video based on the video screening manner to obtain a target monitoring video corresponding to the user monitoring video. Wherein each target surveillance video comprises at least one user surveillance video frame.
Based on steps S110, S120, and S130 in the above embodiment, after the user monitoring videos respectively sent by the plurality of user monitoring terminal devices are acquired, the video feature information of each of the plurality of user monitoring videos is determined first. The video screening mode of each user monitoring video is then determined based on its video feature information, and the corresponding user monitoring video is screened based on the determined video screening mode to obtain the corresponding target monitoring video. Because the video screening mode used for screening is determined based on the video feature information of the user monitoring video, it has a high matching degree with that video, so the reliability of the resulting video screening mode is guaranteed, solving the prior-art problem of a poor screening effect on user monitoring videos.
For example, in some preferred embodiments, the step S110 in the above embodiments may include the following steps to obtain a plurality of user monitoring videos corresponding to the plurality of user monitoring terminal devices:
firstly, when a monitoring starting instruction is received, generating corresponding monitoring starting notification information, and sending the monitoring starting notification information to each user monitoring terminal device in the plurality of user monitoring terminal devices, wherein each user monitoring terminal device is used for acquiring images of a monitoring environment area after receiving the monitoring starting notification information;
and secondly, respectively acquiring a user monitoring video acquired and sent by each user monitoring terminal device in the plurality of user monitoring terminal devices based on the monitoring starting notification information.
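The two steps above can be sketched minimally as follows. All class and method names here (`Terminal`, `MonitoringServer`, and so on) are illustrative assumptions, not part of the patent:

```python
class Terminal:
    """A user monitoring terminal that records frames once notified."""
    def __init__(self, terminal_id):
        self.terminal_id = terminal_id
        self.recording = False
        self.frames = []

    def on_start_notification(self):
        # Begin image acquisition of the monitored environment area.
        self.recording = True

    def capture(self, frame):
        # Frames arriving before the start notification are ignored.
        if self.recording:
            self.frames.append(frame)


class MonitoringServer:
    """Broadcasts start notifications and gathers each terminal's video."""
    def __init__(self, terminals):
        self.terminals = terminals

    def handle_start_instruction(self):
        # Step 1: generate and send the start notification to every terminal.
        for t in self.terminals:
            t.on_start_notification()

    def collect_videos(self):
        # Step 2: acquire the video (here, a frame list) from each terminal.
        return {t.terminal_id: list(t.frames) for t in self.terminals}
```

In a real deployment the notification and collection would of course go over the network; the sketch only mirrors the control flow of the two steps.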
For example, in some preferred embodiments, the step of generating corresponding monitoring start notification information when receiving the monitoring start instruction, and sending the monitoring start notification information to each of the plurality of user monitoring terminal devices may include:
firstly, judging whether a monitoring starting instruction is received or not;
secondly, when it is determined that the monitoring start instruction is received, generating monitoring start notification information carrying a device synchronization instruction, and sending the monitoring start notification information to each user monitoring terminal device in the plurality of user monitoring terminal devices. After receiving the monitoring start notification information, each user monitoring terminal device sends start confirmation information to every other user monitoring terminal device based on the device synchronization instruction carried therein, and starts image acquisition of the monitored environment area only after receiving the start confirmation information sent by every other user monitoring terminal device.
For another example, in another preferred embodiment, the step of generating corresponding monitoring start notification information when receiving the monitoring start instruction, and sending the monitoring start notification information to each of the plurality of user monitoring terminal devices may include:
firstly, judging whether a monitoring starting instruction is received or not;
secondly, when it is determined that the monitoring start instruction is received, generating monitoring start notification information carrying a monitoring stop instruction, and sending the monitoring start notification information to each user monitoring terminal device in the plurality of user monitoring terminal devices. After receiving the monitoring start notification information, each user monitoring terminal device starts image acquisition of the monitored environment area, tracks the data volume of the user monitoring video acquired so far based on the monitoring stop instruction carried in the notification, and stops image acquisition of the monitored environment area once the data volume of the currently acquired user monitoring video is greater than or equal to a data volume threshold (which can be configured according to actual requirements).
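The data-volume stop rule described above can be sketched as follows; the per-frame byte counts and the function name are illustrative assumptions:

```python
def acquire_until_threshold(frame_sizes, data_volume_threshold):
    """Accumulate frames until total data volume reaches the threshold.

    frame_sizes: iterable of per-frame byte counts produced by the camera.
    Returns the list of accepted frame sizes.
    """
    collected = []
    total = 0
    for size in frame_sizes:
        collected.append(size)
        total += size
        # The carried monitoring stop instruction takes effect here:
        # stop once the accumulated volume is >= the configured threshold.
        if total >= data_volume_threshold:
            break
    return collected
```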
For example, in some preferred embodiments, the step of respectively obtaining the user monitoring video acquired and sent by each of the plurality of user monitoring terminal devices based on the monitoring start notification information may include:
firstly, after the monitoring start notification information is sent to each user monitoring terminal device in the plurality of user monitoring terminal devices, current time information is obtained;
secondly, judging whether the current time information belongs to target time information or not, and generating corresponding monitoring stop notification information when the current time information belongs to the target time information;
and then, respectively sending the monitoring stop notification information to each user monitoring terminal device in the plurality of user monitoring terminal devices, wherein each user monitoring terminal device is used for stopping image acquisition in the monitoring environment area after receiving the monitoring stop notification information, and sending the acquired user monitoring video to the user monitoring server.
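The time-based stop check in the steps above can be sketched as follows. The clock representation (a sequence of time samples) and all names are illustrative assumptions:

```python
def poll_and_stop(clock_ticks, target_time, terminals):
    """Scan successive clock readings; notify terminals at the target time.

    clock_ticks: sequence of current-time samples (e.g. epoch seconds).
    Returns the list of stop notifications sent, or [] if the target
    time is never reached within the sampled window.
    """
    for now in clock_ticks:
        # "The current time information belongs to the target time
        # information" is modeled here as reaching the target instant.
        if now >= target_time:
            return [f"stop->{t}" for t in terminals]
    return []
```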
For example, in some preferred embodiments, the step S120 in the above embodiments may include the following steps to determine the video feature information of each user monitoring video:
firstly, for each user monitoring video in the plurality of user monitoring videos, performing object recognition processing (for example, recognizing based on a neural network model for performing object recognition) on a user monitoring video frame included in the user monitoring video to obtain a target user object corresponding to the user monitoring video frame included in the user monitoring video;
secondly, for each user monitoring video in the plurality of user monitoring videos, determining the object identity information of the target user object corresponding to the user monitoring video frame included in the user monitoring video as the video feature information of the user monitoring video.
For another example, in other preferred embodiments, the step S120 in the above embodiments may include the following steps to determine the video feature information of each user monitoring video:
firstly, aiming at each user monitoring video in the plurality of user monitoring videos, determining a monitoring environment area where the user monitoring terminal equipment corresponding to the user monitoring video is located;
secondly, for each user monitoring video in the plurality of user monitoring videos, determining the area position information of the monitoring environment area where the user monitoring terminal equipment corresponding to the user monitoring video is located as the video characteristic information of the user monitoring video.
For example, in some preferred embodiments, the step S130 in the foregoing embodiments may include the following steps, so as to perform screening processing on the corresponding user monitoring video based on the video screening manner, to obtain a target monitoring video corresponding to the user monitoring video:
firstly, for every two user monitoring videos in the plurality of user monitoring videos, determining a video feature correlation representation value between the two user monitoring videos based on the video feature information of the two user monitoring videos, wherein the video feature correlation representation value is used for representing the video feature correlation degree between the two corresponding user monitoring videos;
secondly, determining a video screening mode of each user monitoring video based on a video feature correlation characteristic value between every two user monitoring videos in the plurality of user monitoring videos, and screening the corresponding user monitoring video based on the video screening mode of each user monitoring video to obtain a target monitoring video corresponding to the user monitoring video.
For example, in some preferred embodiments, the step of determining, for each two user surveillance videos in the plurality of user surveillance videos, a video feature correlation relationship characterization value between the two user surveillance videos based on the video feature information of the two user surveillance videos may include:
firstly, for every two user monitoring videos in the plurality of user monitoring videos, calculating video frame similarity between every two user monitoring video frames included in the two user monitoring videos, and calculating an average value of the video frame similarity between every two user monitoring video frames included in the two user monitoring videos as a first characteristic correlation relation representation value between the two user monitoring videos;
secondly, for every two user monitoring videos in the plurality of user monitoring videos, respectively counting the object identity information of the target user objects identified in the two user monitoring videos, and taking the object correlation between the object identity information of the target user objects in the two user monitoring videos as a second feature correlation representation value between the two user monitoring videos (for example, the correlation between the target user objects may be determined based on their object identity information: the correlation between a couple may be greater than that between relatives, and the correlation between relatives or friends may be greater than that between co-workers, where the specific correlation degree values may be defined and configured in advance);
then, for every two user monitoring videos in the plurality of user monitoring videos, calculating area position distance information between the area position information of the monitoring environment areas where the user monitoring terminal devices corresponding to the two user monitoring videos are located, and determining an area position distance representation value that is negatively correlated with the area position distance information, to serve as a third feature correlation representation value between the two user monitoring videos;
then, obtaining a first weight coefficient, a second weight coefficient and a third weight coefficient corresponding to the first feature correlation characterization value, the second feature correlation characterization value and the third feature correlation characterization value respectively, wherein the sum of the first weight coefficient, the second weight coefficient and the third weight coefficient is 1, the first weight coefficient is greater than the second weight coefficient, and the second weight coefficient is greater than the third weight coefficient;
finally, for every two user monitoring videos in the plurality of user monitoring videos, based on the first weight coefficient, the second weight coefficient and the third weight coefficient, performing weighted summation calculation on the first feature correlation characteristic value, the second feature correlation characteristic value and the third feature correlation characteristic value between the two user monitoring videos to obtain a video feature correlation characteristic value between the two user monitoring videos.
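The five steps above can be sketched as follows. The patent only fixes the constraints that the three weights sum to 1 with w1 > w2 > w3 and that the third value is negatively correlated with the region distance; the concrete weight values, the `1/(1+d)` distance mapping, and all function names are illustrative assumptions:

```python
def frame_similarity_mean(similarities):
    """First value: mean pairwise frame similarity between two videos."""
    return sum(similarities) / len(similarities)


def distance_score(region_distance):
    """Third value: negatively correlated with the region position distance."""
    return 1.0 / (1.0 + region_distance)


def correlation_value(similarities, identity_relation, region_distance,
                      w1=0.5, w2=0.3, w3=0.2):
    """Weighted sum of the three feature correlation representation values."""
    assert w1 > w2 > w3 and abs(w1 + w2 + w3 - 1.0) < 1e-9
    first = frame_similarity_mean(similarities)
    second = identity_relation  # pre-configured couple/relative/friend degree
    third = distance_score(region_distance)
    return w1 * first + w2 * second + w3 * third
```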
For example, in some preferred embodiments, the step of determining a video screening manner of each user surveillance video based on a video feature correlation characterization value between every two user surveillance videos in the plurality of user surveillance videos, and performing screening processing on the corresponding user surveillance video based on the video screening manner of each user surveillance video to obtain a target surveillance video corresponding to the user surveillance video may include:
firstly, clustering processing (such as KNN algorithm) is carried out on the user monitoring videos based on a video feature correlation representation value between every two user monitoring videos in the user monitoring videos to obtain at least one corresponding monitoring video set, wherein each monitoring video set in the at least one monitoring video set comprises at least one user monitoring video;
secondly, counting the number of the user monitoring videos included in each monitoring video set to obtain the number of target videos corresponding to the monitoring video set, and determining a screening degree characterization value which has a positive correlation with the number of the target videos and corresponds to the monitoring video set based on the number of the target videos, wherein the screening degree characterization value is used for characterizing the maximum proportion or the maximum number of the screened user monitoring video frames after screening processing is performed on each user monitoring video in the corresponding monitoring video set;
then, for each monitoring video set, sequentially determining each user monitoring video included in the monitoring video set as a first user monitoring video, and executing a target screening operation based on a screening degree characterization value corresponding to the monitoring video set to obtain a target monitoring video corresponding to each user monitoring video included in the monitoring video set.
For example, in some preferred embodiments, the target screening operation in the above embodiments may include the following first to sixth steps:
the method comprises the steps that firstly, the video quantity of user monitoring videos included in a monitoring video set where a first user monitoring video is located is counted, and whether the video quantity is larger than a first preset value (such as 1) or not is determined;
secondly, if the number of the videos is larger than the first preset value, determining at least one user monitoring video with the largest video feature correlation relationship representation value between the user monitoring video and the first user monitoring video from the user monitoring videos included in the monitoring video set where the first user monitoring video is located, and using the user monitoring video as the associated user monitoring video of the first user monitoring video;
thirdly, if the number of videos is less than or equal to the first preset value, determining, among the other monitoring video sets and based on the video feature correlation representation values between the user monitoring videos included in every two monitoring video sets, the monitoring video set whose set correlation characteristic value with the monitoring video set containing the first user monitoring video is the largest as a target monitoring video set (where the set correlation characteristic value between two monitoring video sets is the average value of the video feature correlation representation values between the user monitoring videos they include), then determining, from the user monitoring videos included in the target monitoring video set, at least one user monitoring video with the largest video feature correlation representation value with the first user monitoring video, and using it as an associated user monitoring video of the first user monitoring video;
fourthly, in at least one associated user monitoring video corresponding to the first user monitoring video, determining at least one associated user monitoring video with the smallest area position distance between corresponding user monitoring terminal equipment as at least one target associated user monitoring video, and determining the association degree of each target associated user monitoring video and the first user monitoring video in the data volume dimension (the smaller the data volume difference value is, the larger the corresponding data volume association degree is), so as to obtain at least one data volume association degree;
fifthly, determining a target data volume relevance degree in the at least one data volume relevance degree, and screening out at least one representative data volume relevance degree from the at least one data volume relevance degree based on the target data volume relevance degree, wherein the target data volume relevance degree is an average value of the at least one data volume relevance degree, and each representative data volume relevance degree is greater than or equal to the target data volume relevance degree;
and sixthly, updating the screening degree representation value corresponding to the monitoring video set where the first user monitoring video is located based on the number of representative data volume relevance degrees (the larger that number, the larger the updated screening degree representation value; the smaller that number, the smaller the updated value), and screening the first user monitoring video based on the updated screening degree representation value (for example, screening out a portion of the user monitoring video frames with the greatest mutual similarity, where the maximum proportion or maximum number screened out is determined by the updated screening degree representation value) to obtain the target monitoring video corresponding to the first user monitoring video.
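Steps four to six of the target screening operation can be partially sketched as follows: compute a data-volume relevance against each target associated video, keep the relevance values at or above their mean as "representative", and use their count to scale the screening-degree value. The inverse-difference relevance formula and the additive scaling rule are illustrative assumptions; the patent only requires a smaller data-volume difference to yield a larger relevance and more representative relevances to yield a larger updated value:

```python
def data_volume_relevance(volume_a, volume_b):
    """Smaller data-volume difference -> larger relevance (per the text)."""
    return 1.0 / (1.0 + abs(volume_a - volume_b))


def representative_relevances(relevances):
    """Keep each relevance >= the target (mean) data-volume relevance."""
    target = sum(relevances) / len(relevances)
    return [r for r in relevances if r >= target]


def updated_screening_degree(base_degree, representative_count, step=0.05):
    """More representative relevances -> larger screening-degree value."""
    return base_degree + step * representative_count
```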
For example, in some preferred embodiments, the step S200 in the above embodiments may include the following steps to obtain the video compression degree characterization value corresponding to each target surveillance video:
firstly, aiming at each target monitoring video, performing interframe pixel difference value calculation (namely calculating the pixel difference value of a corresponding pixel point between two frames of user monitoring video frames) on each two adjacent frames of user monitoring video frames included in the target monitoring video to obtain the pixel difference value between each two adjacent frames of user monitoring video frames, and calculating the sum value of the pixel difference values between each two adjacent frames of user monitoring video frames to obtain the total pixel difference value corresponding to the target monitoring video;
secondly, for each target monitoring video, determining a video compression degree representation value that is negatively correlated with the total pixel difference value corresponding to the target monitoring video.
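The pixel-difference analysis above can be sketched as follows: sum the absolute per-pixel differences of each pair of adjacent frames, then map the total to a compression-degree value that decreases as the total grows (a video that changes more is compressed less). The `1/(1+x)` mapping is an illustrative choice of negative correlation, not specified by the patent:

```python
def total_pixel_difference(frames):
    """frames: list of equal-length flat pixel lists, one per video frame."""
    total = 0
    # Walk each pair of adjacent frames and sum per-pixel differences.
    for prev, cur in zip(frames, frames[1:]):
        total += sum(abs(a - b) for a, b in zip(prev, cur))
    return total


def compression_degree(frames):
    """Negatively correlated with the total inter-frame pixel difference."""
    return 1.0 / (1.0 + total_pixel_difference(frames))
```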
For another example, in other preferred embodiments, the step S200 in the above embodiments may include the following steps to obtain the video compression degree characterization value corresponding to each target surveillance video:
firstly, for each target monitoring video, carrying out user object identification processing (for example, recognition based on an existing object recognition neural network model) on each frame of user monitoring video frame included in the target monitoring video to obtain the user objects of the target monitoring video;
secondly, for each target monitoring video, determining a video compression degree representation value that is negatively correlated with the number of user objects of the target monitoring video.
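The object-count variant above reduces to a single mapping: more user objects means more detail worth preserving, so the compression-degree value falls. The `1/(1+n)` form is an assumed instance of the required negative correlation:

```python
def compression_degree_from_objects(user_object_count):
    """Negatively correlated with the number of identified user objects."""
    return 1.0 / (1.0 + user_object_count)
```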
For example, in some preferred embodiments, the step S300 in the above embodiments may include the following steps to perform data compression processing on each target surveillance video to obtain target surveillance video compressed data corresponding to each target surveillance video:
firstly, respectively determining a compression degree adjusting coefficient corresponding to each target monitoring video;
secondly, updating the video compression degree representation value corresponding to each target monitoring video based on the compression degree adjustment coefficient corresponding to the target monitoring video to obtain an updated video compression degree representation value corresponding to the target monitoring video;
then, for each target monitoring video, performing data compression processing on the target monitoring video based on the updated video compression degree representation value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
For example, in some preferred embodiments, the step of determining the compression degree adjustment coefficient corresponding to each target surveillance video separately may include the following steps:
firstly, for every two target surveillance videos in the obtained multiple target surveillance videos, calculating the difference value between the video compression degree characterization values corresponding to the two target surveillance videos, and determining compression similarity information between the two target surveillance videos based on the difference value, wherein the difference value between the video compression degree characterization values and the corresponding compression similarity information have a negative correlation relationship, that is, the smaller the difference value, the higher the corresponding compression similarity;
secondly, based on the obtained compression similarity information between every two target surveillance videos in the multiple target surveillance videos, performing clustering processing on the multiple target surveillance videos (for example, clustering based on a KNN algorithm in the prior art) to obtain at least one surveillance video cluster set corresponding to the multiple target surveillance videos, wherein each surveillance video cluster set in the at least one surveillance video cluster set comprises at least one target surveillance video;
then, counting the number of the target surveillance videos included in the surveillance video cluster set aiming at each surveillance video cluster set in the at least one surveillance video cluster set to obtain the video counting number corresponding to the surveillance video cluster set;
then, for each monitoring video cluster set in the at least one monitoring video cluster set, counting the average value of the data volume of the target monitoring video included in the monitoring video cluster set to obtain the average value of the video statistical data volume corresponding to the monitoring video cluster set;
and finally, aiming at each monitoring video cluster set in the at least one monitoring video cluster set, carrying out fusion processing on the video statistic number corresponding to the monitoring video cluster set and the corresponding video statistic data volume mean value to obtain a compression degree adjusting coefficient corresponding to the monitoring video cluster set, wherein the video statistic number and the video statistic data volume mean value have positive correlation with the compression degree adjusting coefficient.
For example, in some preferred embodiments, the step of, for each of the at least one surveillance video cluster set, performing fusion processing on the video statistics number corresponding to the surveillance video cluster set and the corresponding video statistics data volume average value to obtain the compression degree adjustment coefficient corresponding to the surveillance video cluster set may include the following steps:
firstly, determining a first weight coefficient and a second weight coefficient respectively corresponding to the video statistic number and the video statistic data volume average value, wherein the first weight coefficient is larger than the second weight coefficient;
secondly, for each monitoring video cluster set in the at least one monitoring video cluster set, performing fusion processing (weighted summation) on the video statistic number corresponding to the monitoring video cluster set and the average value of the corresponding video statistic data amount based on the first weight coefficient and the second weight coefficient to obtain a compression degree adjusting coefficient corresponding to the monitoring video cluster set.
For example, in some preferred embodiments, the step of, for each target surveillance video, updating the video compression degree representation value corresponding to the target surveillance video based on the compression degree adjustment coefficient corresponding to the target surveillance video to obtain an updated video compression degree representation value corresponding to the target surveillance video may include the following steps:
firstly, aiming at each target monitoring video, calculating the product of the compression degree adjusting coefficient corresponding to the target monitoring video and the video compression degree representation value corresponding to the target monitoring video;
secondly, for each target surveillance video, determining the product of the compression degree adjustment coefficient corresponding to the target surveillance video and the video compression degree representation value corresponding to the target surveillance video as the updated video compression degree representation value corresponding to the target surveillance video.
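The coefficient fusion and the update step above combine into two small functions: the adjustment coefficient is a weighted sum of the cluster's video count and mean data volume (with the count weighted more heavily, as the text requires), and the updated compression-degree value is the coefficient times the original value. The concrete weight values are illustrative assumptions:

```python
def adjustment_coefficient(video_count, mean_data_volume,
                           w_count=0.7, w_volume=0.3):
    """Weighted-sum fusion; the first weight must exceed the second."""
    assert w_count > w_volume
    return w_count * video_count + w_volume * mean_data_volume


def updated_compression_value(coefficient, compression_value):
    """Updated value = adjustment coefficient x original degree value."""
    return coefficient * compression_value
```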
With reference to fig. 3, an embodiment of the present invention further provides a user data compression system based on data processing, which is applicable to the user monitoring server. The user data compression system based on data processing may include the following modules:
the target monitoring video screening processing module is used for screening the acquired user monitoring videos respectively sent by the user monitoring terminal devices to obtain target monitoring videos corresponding to the user monitoring videos, wherein the user monitoring terminal devices are respectively used for carrying out image acquisition on the monitored environment area to obtain a plurality of corresponding user monitoring videos, each user monitoring video comprises a plurality of frames of user monitoring video frames, and each target monitoring video comprises at least one frame of user monitoring video frame;
the target monitoring video analyzing and processing module is used for analyzing and processing video frames of each target monitoring video to obtain a video compression degree representation value corresponding to each target monitoring video;
and the target monitoring video compression processing module is used for carrying out data compression processing on each target monitoring video based on the video compression degree characterization value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
For example, in some preferred embodiments, the target surveillance video parsing module may be specifically configured to implement: aiming at each target monitoring video, performing interframe pixel difference value calculation on each two adjacent frames of user monitoring video frames included in the target monitoring video to obtain a pixel difference value between each two adjacent frames of user monitoring video frames, and calculating a sum value of the pixel difference values between each two adjacent frames of user monitoring video frames to obtain a total pixel difference value corresponding to the target monitoring video; and for each target monitoring video, determining a video compression degree representation value with a negative correlation relation based on the pixel total difference value corresponding to the target monitoring video.
For example, in some preferred embodiments, the target surveillance video compression processing module may be specifically configured to implement: respectively determining a compression degree adjustment coefficient corresponding to each target monitoring video; for each target monitoring video, updating the video compression degree representation value corresponding to the target monitoring video based on the compression degree adjustment coefficient corresponding to the target monitoring video to obtain an updated video compression degree representation value corresponding to the target monitoring video; and for each target monitoring video, performing data compression processing on the target monitoring video based on the updated video compression degree representation value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
In summary, after the user monitoring videos respectively sent by the multiple user monitoring terminal devices are screened to obtain the corresponding target monitoring videos, video frame analysis processing may be performed on each target monitoring video to obtain its video compression degree characterization value, and data compression processing may then be performed on each target monitoring video based on that characterization value to obtain the corresponding target monitoring video compressed data. Compared with the conventional technical scheme in which all monitoring videos are compressed to the same degree, the technical scheme provided by the embodiments of the present invention achieves a better match between the compression degree and each target monitoring video, so that the compression effect is guaranteed, which solves the problem in the prior art that the compression effect on user monitoring videos is poor.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A user data compression method based on data processing is characterized in that the method is applied to a user monitoring server, the user monitoring server is in communication connection with a plurality of user monitoring terminal devices, and the user data compression method based on data processing comprises the following steps:
screening the obtained user monitoring videos respectively sent by the user monitoring terminal devices to obtain target monitoring videos corresponding to the user monitoring videos, wherein the user monitoring terminal devices are respectively used for carrying out image acquisition on the monitored environment area to obtain a plurality of corresponding user monitoring videos, each user monitoring video comprises a plurality of frames of user monitoring video frames, and each target monitoring video comprises at least one frame of user monitoring video frame;
for each target monitoring video, carrying out video frame analysis processing on the target monitoring video to obtain a video compression degree representation value corresponding to the target monitoring video;
and for each target monitoring video, performing data compression processing on the target monitoring video based on the video compression degree characterization value corresponding to the target monitoring video to obtain target monitoring video compression data corresponding to the target monitoring video.
2. The data processing-based user data compression method according to claim 1, wherein the step of performing video frame parsing on each target surveillance video to obtain a video compression degree representation value corresponding to the target surveillance video comprises:
aiming at each target monitoring video, performing interframe pixel difference value calculation on each two adjacent frames of user monitoring video frames included in the target monitoring video to obtain a pixel difference value between each two adjacent frames of user monitoring video frames, and calculating a sum value of the pixel difference values between each two adjacent frames of user monitoring video frames to obtain a total pixel difference value corresponding to the target monitoring video;
and for each target monitoring video, determining a video compression degree representation value with a negative correlation relation based on the pixel total difference value corresponding to the target monitoring video.
3. The data processing-based user data compression method according to claim 1, wherein the step of performing video frame parsing on each target surveillance video to obtain a video compression degree representation value corresponding to the target surveillance video comprises:
for each target monitoring video, carrying out user object identification processing on each frame of user monitoring video frame included in the target monitoring video to obtain a user object of the target monitoring video;
and determining a video compression degree representation value with a negative correlation relation according to the number of user objects of each target monitoring video.
4. The data processing-based user data compression method according to any one of claims 1 to 3, wherein, for each target surveillance video, the step of performing data compression processing on the target surveillance video based on the video compression degree characterization value corresponding to the target surveillance video to obtain target surveillance video compressed data corresponding to the target surveillance video comprises:
determining a compression degree adjustment coefficient corresponding to each target surveillance video;
for each target surveillance video, updating the video compression degree characterization value corresponding to the target surveillance video based on the corresponding compression degree adjustment coefficient to obtain an updated video compression degree characterization value;
and for each target surveillance video, performing data compression processing on the target surveillance video based on the updated video compression degree characterization value to obtain target surveillance video compressed data corresponding to the target surveillance video.
5. The data processing-based user data compression method according to claim 4, wherein the step of determining the compression degree adjustment coefficient corresponding to each target surveillance video comprises:
for every two target surveillance videos among the obtained target surveillance videos, calculating the difference between the video compression degree characterization values corresponding to the two videos, and determining compression similarity information between the two videos based on that difference, wherein the difference between the characterization values and the corresponding compression similarity information are negatively correlated;
clustering the target surveillance videos based on the obtained compression similarity information between every two target surveillance videos to obtain at least one surveillance video cluster set, each of which comprises at least one target surveillance video;
for each surveillance video cluster set, counting the number of target surveillance videos included in the set to obtain a video statistical count corresponding to the set;
for each surveillance video cluster set, computing the mean data volume of the target surveillance videos included in the set to obtain a video statistical data volume mean corresponding to the set;
and for each surveillance video cluster set, fusing the video statistical count corresponding to the set with the corresponding video statistical data volume mean to obtain the compression degree adjustment coefficient corresponding to the set, wherein both the video statistical count and the video statistical data volume mean are positively correlated with the compression degree adjustment coefficient.
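A minimal sketch of claim 5 under stated assumptions: a greedy threshold clustering stands in for the unspecified clustering step, and the fusion is a weighted sum; `tol`, the weights, and all names are illustrative, not the patent's method:

```python
# Hypothetical sketch of claim 5: cluster videos by similarity of their
# compression degree values, then derive a per-cluster adjustment coefficient.

def cluster_videos(videos, tol=0.1):
    """videos: list of (degree_value, data_volume) pairs.

    After sorting by degree value, adjacent videos whose values differ by
    less than `tol` are treated as similar and placed in the same cluster.
    """
    clusters = []
    for video in sorted(videos, key=lambda v: v[0]):
        if clusters and abs(video[0] - clusters[-1][-1][0]) < tol:
            clusters[-1].append(video)
        else:
            clusters.append([video])
    return clusters

def adjustment_coefficient(cluster, w_count=0.6, w_volume=0.4):
    """Fuse the cluster's video count and mean data volume; both quantities
    are positively correlated with the resulting coefficient."""
    count = len(cluster)
    mean_volume = sum(v[1] for v in cluster) / count
    return w_count * count + w_volume * mean_volume
```

The weighted-sum fusion also matches claim 6 when the count weight exceeds the volume weight, as here.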
6. The data processing-based user data compression method according to claim 5, wherein the step of, for each surveillance video cluster set, fusing the video statistical count corresponding to the set with the corresponding video statistical data volume mean to obtain the compression degree adjustment coefficient corresponding to the set comprises:
determining a first weight coefficient and a second weight coefficient corresponding respectively to the video statistical count and the video statistical data volume mean, wherein the first weight coefficient is larger than the second weight coefficient;
and for each surveillance video cluster set, fusing the video statistical count corresponding to the set with the corresponding video statistical data volume mean based on the first weight coefficient and the second weight coefficient to obtain the compression degree adjustment coefficient corresponding to the set.
7. The data processing-based user data compression method according to claim 4, wherein the step of, for each target surveillance video, updating the video compression degree characterization value corresponding to the target surveillance video based on the corresponding compression degree adjustment coefficient to obtain an updated video compression degree characterization value comprises:
for each target surveillance video, calculating the product of the compression degree adjustment coefficient corresponding to the target surveillance video and the video compression degree characterization value corresponding to the target surveillance video;
and for each target surveillance video, determining that product as the updated video compression degree characterization value corresponding to the target surveillance video.
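The claim-7 update rule is plain multiplication; a one-line sketch with illustrative names:

```python
# Hypothetical sketch of claim 7: the updated characterization value is the
# product of the adjustment coefficient and the original value.

def updated_degree(degree_value, adjustment_coefficient):
    """Claim-7 update: multiply the two quantities."""
    return degree_value * adjustment_coefficient
```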
8. A data processing-based user data compression system, applied to a user surveillance server communicatively connected to a plurality of user surveillance terminal devices, the system comprising:
a target surveillance video screening processing module, configured to screen the acquired user surveillance videos sent by the respective user surveillance terminal devices to obtain target surveillance videos corresponding to the user surveillance videos, wherein the user surveillance terminal devices are each configured to perform image acquisition on a monitored environment area to obtain the corresponding user surveillance videos, each user surveillance video comprising a plurality of user surveillance video frames and each target surveillance video comprising at least one user surveillance video frame;
a target surveillance video parsing processing module, configured to perform video frame parsing on each target surveillance video to obtain a video compression degree characterization value corresponding to each target surveillance video;
and a target surveillance video compression processing module, configured to perform data compression processing on each target surveillance video based on the corresponding video compression degree characterization value to obtain target surveillance video compressed data corresponding to each target surveillance video.
9. The data processing-based user data compression system according to claim 8, wherein the target surveillance video parsing processing module is specifically configured to:
for each target surveillance video, calculate an inter-frame pixel difference between every two adjacent user surveillance video frames included in the target surveillance video to obtain a pixel difference value for each pair of adjacent frames, and sum these pixel difference values to obtain a total pixel difference value corresponding to the target surveillance video;
and for each target surveillance video, determine, based on the total pixel difference value corresponding to the target surveillance video, a video compression degree characterization value that is negatively correlated with that total pixel difference value.
10. The data processing-based user data compression system according to claim 8 or 9, wherein the target surveillance video compression processing module is specifically configured to:
determine a compression degree adjustment coefficient corresponding to each target surveillance video;
for each target surveillance video, update the video compression degree characterization value corresponding to the target surveillance video based on the corresponding compression degree adjustment coefficient to obtain an updated video compression degree characterization value;
and for each target surveillance video, perform data compression processing on the target surveillance video based on the updated video compression degree characterization value to obtain target surveillance video compressed data corresponding to the target surveillance video.
CN202111251895.2A 2021-10-26 2021-10-26 User data compression method and system based on data processing Withdrawn CN114095734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111251895.2A CN114095734A (en) 2021-10-26 2021-10-26 User data compression method and system based on data processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111251895.2A CN114095734A (en) 2021-10-26 2021-10-26 User data compression method and system based on data processing

Publications (1)

Publication Number Publication Date
CN114095734A true CN114095734A (en) 2022-02-25

Family

ID=80297773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111251895.2A Withdrawn CN114095734A (en) 2021-10-26 2021-10-26 User data compression method and system based on data processing

Country Status (1)

Country Link
CN (1) CN114095734A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863364A (en) * 2022-05-20 2022-08-05 碧桂园生活服务集团股份有限公司 Security detection method and system based on intelligent video monitoring
CN114863364B (en) * 2022-05-20 2023-03-07 碧桂园生活服务集团股份有限公司 Security detection method and system based on intelligent video monitoring

Similar Documents

Publication Publication Date Title
CN114140713A (en) Image recognition system and image recognition method
CN114140710A (en) Monitoring data transmission method and system based on data processing
CN114140712A (en) Automatic image recognition and distribution system and method
CN114581856A (en) Agricultural unit motion state identification method and system based on Beidou system and cloud platform
CN113868471A (en) Data matching method and system based on monitoring equipment relationship
CN114095734A (en) User data compression method and system based on data processing
CN114139016A (en) Data processing method and system for intelligent cell
CN114697618A (en) Building control method and system based on mobile terminal
CN113902993A (en) Environmental state analysis method and system based on environmental monitoring
CN115065842B (en) Panoramic video streaming interaction method and system based on virtual reality
CN115375886A (en) Data acquisition method and system based on cloud computing service
CN115457467A (en) Building quality hidden danger positioning method and system based on data mining
CN114677615A (en) Environment detection method and system
CN115330140A (en) Building risk prediction method based on data mining and prediction system thereof
CN114189535A (en) Service request method and system based on smart city data
CN113949881A (en) Service processing method and system based on smart city data
CN114173086A (en) User data screening method based on data processing
CN114153654A (en) User data backup method and system based on data processing
CN113676362A (en) Internet of things equipment binding method and system based on data processing
CN114156495B (en) Laminated battery assembly processing method and system based on big data
CN114418555B (en) Project information management method and system applied to intelligent construction
CN114201676A (en) User recommendation method and system based on intelligent cell user matching
CN114139017A (en) Safety protection method and system for intelligent cell
CN113868339A (en) Data synchronization method and system based on monitoring equipment relationship
CN114140709A (en) Monitoring data distribution method and system based on data processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220225