CN114095750B - Cloud platform monitoring method and system and computer readable storage medium - Google Patents
- Publication number
- CN114095750B CN114095750B CN202111380789.4A CN202111380789A CN114095750B CN 114095750 B CN114095750 B CN 114095750B CN 202111380789 A CN202111380789 A CN 202111380789A CN 114095750 B CN114095750 B CN 114095750B
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23109—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion by placing content in organized collections, e.g. EPG data repository
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The application provides a cloud platform monitoring method and related products, wherein the method comprises the following steps: the cloud platform receives a monitoring video captured by a monitoring device and extracts the device ID of that device; according to the device ID, the cloud platform looks up the acquisition cycle of the monitoring device and a background video covering one cycle, divides the monitoring video into n sub-videos according to the acquisition cycle, and classifies the n sub-videos to determine their types, the types comprising still image sub-videos and moving image sub-videos; the cloud platform then stores the n1 still image sub-videos among the n sub-videos in a static mode and the n2 moving image sub-videos in a dynamic mode. The technical scheme provided by the application has the advantage of low storage cost.
Description
Technical Field
The invention relates to the technical field of the internet and cloud platforms, and in particular to a cloud platform monitoring method and system and a computer-readable storage medium.
Background
A cloud computing platform, also called a cloud platform, provides computing, network, and storage capabilities as a service built on hardware and software resources. Cloud computing platforms can be divided into a variety of types, such as storage-oriented cloud platforms built around data storage. Video surveillance is the most common form of monitoring, and its biggest problem is that surveillance video files are large, which increases the monitoring cost of the cloud platform.
Disclosure of Invention
Embodiments of the invention provide a cloud platform monitoring method and system and a computer-readable storage medium, which apply different storage modes to different types of sub-videos, thereby reducing the storage space required and lowering the storage cost of cloud platform monitoring.
In a first aspect, an embodiment of the present invention provides a cloud platform monitoring method, where the method includes the following steps:
the cloud platform receives a monitoring video acquired by monitoring equipment, and extracts the equipment ID of the monitoring equipment;
the cloud platform extracts an acquisition cycle of the monitoring equipment and a background video in one cycle according to the equipment ID, divides the monitoring video into n sub-videos according to the acquisition cycle, and classifies the n sub-videos to determine the types of the n sub-videos; the types include: a still image sub-video and a moving image sub-video;
the cloud platform stores n1 sub videos of static images in the n sub videos in a static mode and stores n2 sub videos of dynamic images in the n sub videos in a dynamic mode;
where n = n1 + n2.
In a second aspect, a cloud platform monitoring system is provided, the system comprising:
the communication unit is used for receiving the monitoring video acquired by the monitoring equipment;
a processing unit for extracting a device ID of the monitoring device; extracting an acquisition cycle of the monitoring equipment and a background video in one cycle according to the equipment ID, dividing the monitoring video into n sub-videos according to the acquisition cycle, and classifying the n sub-videos to determine the types of the n sub-videos; the types include: a still image sub-video and a moving image sub-video; storing n1 sub-videos of the static images in the n sub-videos in a static mode, and storing n2 sub-videos of the dynamic images in the n sub-videos in a dynamic mode;
where n = n1 + n2.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
according to the technical scheme, the cloud platform receives the monitoring video acquired by the monitoring equipment, and extracts the equipment ID of the monitoring equipment; the cloud platform extracts an acquisition cycle of the monitoring equipment and a background video in a cycle according to the equipment ID, divides the monitoring video into n sub-videos according to the acquisition cycle, and classifies the n sub-videos to determine static images and dynamic images in the n sub-videos; the cloud platform stores n1 sub-videos of static images in the n sub-videos in a static mode, and stores n2 sub-videos of dynamic images in the n sub-videos in a static mode. Therefore, different types of monitoring videos can be stored respectively, the storage space is further reduced, and the user experience is improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal.
Fig. 2 is a flowchart illustrating a cloud platform monitoring method.
Fig. 3 is a schematic structural diagram of a cloud platform monitoring system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 shows a terminal device. The terminal device may run iOS, Android, or another system such as HarmonyOS; the application does not limit the specific system. As shown in fig. 1, the terminal device may specifically include a processor, a memory, a camera, and a display screen, and these components may be connected through a bus or in other ways; the application does not limit the specific connection method.
Monitoring videos are shot by a camera installed at a fixed position, and the shot footage generally falls into two types. The first type is the still image, that is, monitoring video that contains no target object; such video occupies a large share of the footage. The second type is the moving image, that is, monitoring video that contains a target object; such video makes up a certain proportion of the footage, but it also contains many irrelevant elements, such as the background of the scene. When a moving image is stored, the background accounts for a large share of the data in each video frame, so moving images also occupy a large amount of storage, which increases the storage cost of cloud platform video monitoring.
Referring to fig. 2, fig. 2 shows a cloud platform monitoring method. As shown in fig. 2, the method is executed by a cloud platform, which may be composed of a plurality of terminal devices as shown in fig. 1; the cloud platform may use any of a number of network topologies, and the method does not limit the topology of the cloud platform composed of those terminal devices. The method includes the following steps:
step S201, a cloud platform receives a monitoring video collected by monitoring equipment, and the cloud platform extracts an equipment ID of the monitoring equipment;
For example, the cloud platform may receive the monitoring video captured by the monitoring device over a wired or wireless connection. The wireless connection may be, for example, LTE or NR. Over LTE or NR, the monitoring device may send the monitoring video to the cloud platform through an access network device, which includes but is not limited to: a next generation base station (gNB) in a fifth-generation (5G) mobile communication system, an evolved NodeB (eNB), a radio network controller (RNC), a NodeB (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (e.g., home evolved NodeB or home NodeB, HNB), a baseband unit (BBU), a transmission/reception point (TRP), a transmission point (TP), a mobile switching center, and the like. The access network device may also be a wireless controller, a centralized unit (CU), and/or a distributed unit (DU) in a cloud radio access network (C-RAN) scenario; or the access network device may be a relay station, an access point, a vehicle-mounted device, a terminal device, a wearable device, an access network device in a future mobile communication system, an access network device in a future evolved PLMN, and the like.
The device ID may be the MAC address or IP address of the device, or any identification number or code that uniquely identifies the monitoring device.
Step S202, the cloud platform extracts an acquisition cycle of the monitoring equipment and a background video in one cycle according to the equipment ID, divides the monitoring video into n sub-videos according to the acquisition cycle, and classifies the n sub-videos to determine the types of the n sub-videos; the types include: a still image sub-video and a moving image sub-video;
For example, the acquisition cycle may specifically be the time the monitoring device takes to cyclically capture a specific scene, and the background video for one cycle may be one cycle of footage of that specific scene captured while it contains no target object — for example, an office space with no target object (e.g., a person or vehicle) in it, or a road with no target object (e.g., a person or vehicle) on it.
For example, consider monitoring video in an elevator where the monitoring device operates in a rotating mode. If one full forward-and-reverse rotation takes 20 seconds, the above acquisition cycle is 20 seconds, and the background video may be the first video shot during one full forward-and-reverse rotation while no one is in the elevator.
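The division of the monitoring video by acquisition cycle can be sketched as follows. This is a minimal illustration, assuming the cycle length is known in frames (e.g., a 20-second rotation at 25 fps gives 500 frames per cycle); the function name is illustrative, not from the patent.

```python
# Divide a monitoring video into n sub-videos, one per acquisition cycle.
# Per the patent, the total capture time is an integer number of cycles,
# so every sub-video has exactly frames_per_cycle frames.
def split_by_cycle(frames, frames_per_cycle):
    return [frames[i:i + frames_per_cycle]
            for i in range(0, len(frames), frames_per_cycle)]

video = list(range(1500))           # stand-in for 1500 decoded frames
subs = split_by_cycle(video, 500)   # 20 s cycle at 25 fps -> 500 frames
print(len(subs))                    # 3 sub-videos, each one cycle long
```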
For example, a still image sub-video may be a sub-video in which no video frame contains the target object, and a moving image sub-video may be a sub-video in which at least one video frame contains the target object.
Specifically, suppose sub-video 1 contains 1024 image frames. If none of the 1024 image frames contains the target object, sub-video 1 is determined to be a still image sub-video; if any one of the 1024 image frames contains the target object, sub-video 1 is determined to be a moving image sub-video.
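The frame-level rule in the 1024-frame example above can be sketched as follows. The per-frame detector is a stand-in here; the patent leaves the classifier open (e.g., a support vector machine or neural network could fill that role).

```python
# A sub-video is a "moving image" sub-video if any of its frames contains
# the target object, and a "still image" sub-video otherwise.
def classify(sub_video, contains_target):
    return ("moving" if any(contains_target(frame) for frame in sub_video)
            else "still")

# 1024-frame sub-video with a person present in frame 700 only.
sub_video = [{"person": i == 700} for i in range(1024)]
print(classify(sub_video, lambda f: f["person"]))  # moving
```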
Step S203, the cloud platform stores n1 sub videos of static images in the n sub videos in a static mode, and stores n2 sub videos of dynamic images in the n sub videos in a dynamic mode;
where n = n1 + n2.
According to the above technical scheme, the cloud platform receives a monitoring video captured by a monitoring device and extracts the device ID of that device; according to the device ID, the cloud platform looks up the acquisition cycle of the monitoring device and a background video covering one cycle, divides the monitoring video into n sub-videos according to the acquisition cycle, and classifies the n sub-videos to determine the still images and moving images among them; the cloud platform then stores the n1 still image sub-videos among the n sub-videos in a static mode and the n2 moving image sub-videos in a dynamic mode. Different types of monitoring video can thus be stored differently, which reduces the storage space required and improves the user experience.
For example, the classifying the n sub-videos to determine the types of the n sub-videos may specifically include:
inputting each of the n sub-videos into a classifier to identify and determine the type of the n sub-videos.
The classifier may be a variety of classifiers such as a support vector machine, a neural network classifier, a deep neural network model, and the like. Of course, other classifiers with classification functions may also be used, and the specific implementation manner of the classifier is not limited in the present application.
For example, the storing, by the cloud platform, n1 sub videos of the still image in the n sub videos in a static manner specifically includes:
deleting the stored files of the n1 sub-videos and establishing a mapping relationship between the n1 sub-videos and the background video, namely a correspondence from each of the n1 sub-videos to the background video. When any of the n1 sub-videos needs to be played, the background video is called and played according to this mapping. In this way, the n1 sub-videos themselves need not be stored; only the background video needs to be stored.
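The static storage mode above can be sketched as a mapping table in place of the deleted files. This is a minimal illustration; the class and field names are assumptions, not from the patent.

```python
# Static storage mode: still sub-videos are deleted and replaced by a
# mapping entry; playback substitutes the single stored background video.
class StaticStore:
    def __init__(self, background_video):
        self.background = background_video   # stored once per device
        self.mapping = {}                    # sub-video id -> background ref

    def store_still(self, sub_video_id):
        # The sub-video file itself is NOT kept; only the mapping entry is.
        self.mapping[sub_video_id] = "background"

    def play(self, sub_video_id):
        # Play the background video in place of the deleted sub-video.
        return self.background if sub_video_id in self.mapping else None

store = StaticStore(background_video=["bg_frame"] * 500)
store.store_still(0)
print(len(store.play(0)))   # 500 — background frames played in its place
```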
For example, the storing n2 sub-videos of the dynamic image in the n sub-videos in a dynamic manner may specifically include:
extracting one sub-video of the n2 sub-videos and performing a dynamic storage operation on it, where the dynamic storage operation may specifically include: classifying each of the m video frames of the sub-video to determine the m1 video frames that contain the target object and the m2 video frames that do not; storing the m2 frame numbers of the m2 video frames (a frame number is the sequential position of the frame within the sub-video); storing the m1 frame numbers of the m1 video frames; extracting the region of the target object in each of the m1 video frames, keeping the image data within the target-object region and discarding the image data outside it; establishing a mapping relationship between the image data within the target-object region and the m1 frame numbers; and storing the image data within the target-object region together with that mapping relationship. Each of the n2 sub-videos is traversed and the dynamic storage operation is performed on it.
When a sub-video stored in the dynamic mode needs to be played, a first image frame is extracted. If the first image frame is one of the m2 frame numbers, the corresponding frame of the background video is extracted and played directly. If the first image frame is one of the m1 frame numbers, the corresponding frame of the background video is extracted, the image data within the target-object region corresponding to that frame number is extracted according to the mapping relationship, the target-region image data is superimposed on the background frame to obtain a superimposed image frame, and the superimposed image frame is played.
In the superimposed image frame, the image data within the target-object region forms the top layer, and the first (background) image frame forms the bottom layer.
This arrangement reduces the storage space needed for the video frames, because only the image data within the target-object region needs to be stored; the remaining image data is recovered by extracting the corresponding frame of the background video by frame number and superimposing.
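The dynamic storage operation and its playback can be sketched as follows. Frames are modeled as dictionaries of pixel positions purely for illustration, and the function names and the target-region detector are assumptions, not from the patent.

```python
# Dynamic storage mode: for each frame containing the target, keep only the
# frame number and the pixels inside the target-object region; frames
# without a target keep only the frame number.
def store_dynamic(sub_video, target_region):
    stored = {"empty": [], "target": {}}     # m2 numbers / m1 cropped regions
    for k, frame in enumerate(sub_video):
        region = target_region(frame)        # dict of pos -> pixel, or None
        if region is None:
            stored["empty"].append(k)        # m2 frames: frame number only
        else:
            stored["target"][k] = region     # m1 frames: number + region only

    return stored

# Playback: overlay the stored target-region pixels (top layer) onto the
# same-numbered background frame (bottom layer).
def play_frame(k, stored, background):
    base = dict(background[k])
    if k in stored["target"]:
        base.update(stored["target"][k])
    return base

background = [{"p0": "bg", "p1": "bg"} for _ in range(3)]
sub = [{"p0": "bg", "p1": "bg"},
       {"p0": "bg", "p1": "person"},         # target occupies position p1
       {"p0": "bg", "p1": "bg"}]
region_of = lambda f: {"p1": f["p1"]} if f["p1"] != "bg" else None
stored = store_dynamic(sub, region_of)
print(play_frame(1, stored, background))     # {'p0': 'bg', 'p1': 'person'}
```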
It should be noted that the start position of the monitoring video is the same as the start position of the background video, and the total time for acquiring the monitoring video is an integer number of acquisition cycles.
The frame number may be the frame number within a single sub-video. Because the background video and each sub-video have the same number of video frames in the same order, the background picture at the corresponding position can very easily be extracted by frame number, which makes superposition convenient.
Referring to fig. 3, fig. 3 provides a cloud platform monitoring system, which includes:
the communication unit 301 is configured to receive a monitoring video acquired by monitoring equipment;
a processing unit 302 for extracting a device ID of the monitoring device; extracting an acquisition cycle of the monitoring equipment and a background video in one cycle according to the equipment ID, dividing the monitoring video into n sub-videos according to the acquisition cycle, and classifying the n sub-videos to determine the types of the n sub-videos; the types include: a still image sub-video and a moving image sub-video; storing n1 sub-videos of static images in the n sub-videos in a static mode, and storing n2 sub-videos of dynamic images in the n sub-videos in a dynamic mode;
where n = n1 + n2.
Optionally,
the processing unit is specifically configured to input each of the n sub-videos into the classifier to identify and determine the type of the n sub-videos.
Optionally,
the processing unit is specifically configured to delete the storage file of the n1 sub videos, and establish a mapping relationship between the n1 sub videos and the background video, where the mapping relationship is: n1 correspondence of sub-videos to background videos.
Optionally,
the processing unit is specifically configured to extract one sub-video of the n2 sub-videos and perform a dynamic storage operation on it, where the dynamic storage operation specifically includes: classifying each of the m video frames of the sub-video to determine the m1 video frames that contain the target object and the m2 video frames that do not; storing the m2 frame numbers of the m2 video frames; storing the m1 frame numbers of the m1 video frames; extracting the region of the target object in each of the m1 video frames, keeping the image data within the target-object region and discarding the image data outside it; establishing a mapping relationship between the image data within the target-object region and the m1 frame numbers; storing the image data within the target-object region together with that mapping relationship; and traversing each of the n2 sub-videos and performing the dynamic storage operation on it;
wherein m = m1 + m2.
For example, the processing unit in the embodiment of the present application may also be configured to execute the refinement scheme, the alternative scheme, and the like of the embodiment shown in fig. 2, which are not described herein again.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the cloud platform monitoring methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to execute some or all of the steps of any one of the cloud platform monitoring methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may be performed in other orders or concurrently according to the present invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (7)
1. A cloud platform monitoring method is characterized by comprising the following steps:
the cloud platform receives a monitoring video acquired by monitoring equipment, and extracts the equipment ID of the monitoring equipment;
the cloud platform extracts an acquisition cycle of the monitoring equipment and a background video in one cycle according to the equipment ID, divides the monitoring video into n sub-videos according to the acquisition cycle, and classifies the n sub-videos to determine the types of the n sub-videos; the types include: a still image sub-video and a moving image sub-video;
the cloud platform stores n1 sub videos of static images in the n sub videos in a static mode and stores n2 sub videos of dynamic images in the n sub videos in a dynamic mode;
wherein n = n1 + n2;
the storing the n2 sub-videos of the dynamic image in the n sub-videos in a dynamic manner specifically includes:
extracting one sub-video of the n2 sub-videos and performing a dynamic storage operation on it, where the dynamic storage operation specifically includes: classifying each of the m video frames of the sub-video to determine the m1 video frames that contain the target object and the m2 video frames that do not; storing the m2 frame numbers of the m2 video frames; storing the m1 frame numbers of the m1 video frames; extracting the region of the target object in each of the m1 video frames, keeping the image data within the target-object region and discarding the image data outside it; establishing a mapping relationship between the image data within the target-object region and the m1 frame numbers; storing the image data within the target-object region together with that mapping relationship; and traversing each of the n2 sub-videos and performing the dynamic storage operation on it;
wherein m = m1 + m2;
when a sub-video stored in the dynamic mode needs to be played, extracting a first image frame; if the first image frame is one of the m2 frame numbers, directly extracting and playing the corresponding frame of the background video; if the first image frame is one of the m1 frame numbers, extracting the corresponding frame of the background video, extracting the image data within the target-object region corresponding to that frame number according to the mapping relationship, superimposing the target-region image data on the background frame to obtain a superimposed image frame, and playing the superimposed image frame;
in the superposed image frame, the image data within the range of the target object is a top-layer image of the superposed image frame, and the first image frame is a bottom-layer image of the superposed image frame.
2. The method according to claim 1, wherein the classifying the n sub-videos to determine the types of the n sub-videos specifically comprises:
inputting each of the n sub-videos into a classifier to identify and determine the type of the n sub-videos.
3. The method according to claim 1, wherein the cloud platform storing n1 sub-videos of the still image in the n sub-videos in a static manner specifically comprises:
deleting the storage file of the n1 sub videos, and establishing a mapping relation between the n1 sub videos and the background video, wherein the mapping relation is as follows: n1 correspondence of sub-videos to background videos.
4. A cloud platform monitoring system, the system comprising:
the communication unit is used for receiving the monitoring video acquired by the monitoring equipment;
a processing unit, configured to extract a device ID of the monitoring device; extract, according to the device ID, an acquisition cycle of the monitoring device and a background video within one cycle; divide the monitoring video into n sub-videos according to the acquisition cycle; classify the n sub-videos to determine the types of the n sub-videos, the types including: a still-image sub-video and a moving-image sub-video; store, in a static manner, the n1 still-image sub-videos of the n sub-videos; and store, in a dynamic manner, the n2 moving-image sub-videos of the n sub-videos;
wherein n = n1 + n2;
the processing unit is specifically configured to extract one sub-video of the n2 sub-videos and perform a dynamic storage operation on the one sub-video, where the dynamic storage operation specifically includes: performing classification and identification on the m video frames of the sub-video respectively to determine m1 video frames containing a target object and m2 video frames not containing the target object; storing the m2 frame numbers of the m2 video frames and the m1 frame numbers of the m1 video frames; extracting the range of the target object in each of the m1 video frames, retaining the image data within the range of the target object and discarding the image data outside the range of the target object; establishing a mapping relation between the image data within the range of the target object and the m1 frame numbers; storing the image data within the range of the target object, the m1 frame numbers and the mapping relation; and traversing each sub-video of the n2 sub-videos to perform the dynamic storage operation;
wherein m = m1 + m2;
when a dynamically stored sub-video needs to be played, extracting a first image frame; if the first image frame corresponds to one of the m2 frame numbers, extracting the first image frame directly from the background video and playing it; if the first image frame corresponds to one of the m1 frame numbers, extracting the first image frame from the background video, extracting, according to the mapping relation, the image data within the range of the target object corresponding to the first image frame, superposing the image data within the range of the target object on the first image frame to obtain a superposed image frame, and playing the superposed image frame;
in the superposed image frame, the image data within the range of the target object is a top-layer image of the superposed image frame, and the first image frame is a bottom-layer image of the superposed image frame.
5. The system of claim 4,
the processing unit is specifically configured to input each of the n sub-videos into the classifier to identify and determine the type of each of the n sub-videos.
6. The system of claim 4,
the processing unit is specifically configured to delete the storage files of the n1 sub-videos and establish a mapping relation between the n1 sub-videos and the background video, wherein the mapping relation is the correspondence between each of the n1 sub-videos and the background video.
7. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to perform the method as provided in any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111380789.4A CN114095750B (en) | 2021-11-20 | 2021-11-20 | Cloud platform monitoring method and system and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114095750A CN114095750A (en) | 2022-02-25 |
CN114095750B true CN114095750B (en) | 2022-09-02 |
Family
ID=80302308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111380789.4A Active CN114095750B (en) | 2021-11-20 | 2021-11-20 | Cloud platform monitoring method and system and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114095750B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115150371B (en) * | 2022-08-31 | 2022-12-20 | 深圳市万佳安物联科技股份有限公司 | Service processing method, system and storage medium based on cloud platform |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110309351A (en) * | 2018-02-14 | 2019-10-08 | 阿里巴巴集团控股有限公司 | Video image generation, device and the computer system of data object |
CN111510645A (en) * | 2020-04-27 | 2020-08-07 | 北京字节跳动网络技术有限公司 | Video processing method and device, computer readable medium and electronic equipment |
CN111669502A (en) * | 2020-06-19 | 2020-09-15 | 北京字节跳动网络技术有限公司 | Target object display method and device and electronic equipment |
CN113066092A (en) * | 2021-03-30 | 2021-07-02 | 联想(北京)有限公司 | Video object segmentation method and device and computer equipment |
WO2021227790A1 (en) * | 2020-05-15 | 2021-11-18 | 深圳云天励飞技术股份有限公司 | Video image target association method, apparatus, and terminal device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108063914B (en) * | 2017-11-22 | 2020-10-16 | 国政通科技股份有限公司 | Method and device for generating and playing monitoring video file and terminal equipment |
CN110197126A (en) * | 2019-05-06 | 2019-09-03 | 深圳岚锋创视网络科技有限公司 | A kind of target tracking method, device and portable terminal |
CN111901604B (en) * | 2020-09-29 | 2021-01-26 | 创新奇智(南京)科技有限公司 | Video compression method, video reconstruction method, corresponding devices, camera and video processing equipment |
CN112926385B (en) * | 2021-01-21 | 2023-01-13 | 中广(绍兴柯桥)有线信息网络有限公司 | Video processing method of monitoring equipment and related product |
CN113596473B (en) * | 2021-07-28 | 2023-06-13 | 浙江大华技术股份有限公司 | Video compression method and device |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| PE01 | Entry into force of the registration of the contract for pledge of patent right | Denomination of invention: Cloud platform monitoring methods, systems, and computer-readable storage media. Effective date of registration: 20231226. Granted publication date: 20220902. Pledgee: Shenzhen Branch of China Merchants Bank Co.,Ltd. Pledgor: EDEN INFORMATION SERVICE Ltd. Registration number: Y2023980074482 |