CN110198475B - Video processing method, device, equipment, server and readable storage medium - Google Patents

Video processing method, device, equipment, server and readable storage medium

Info

Publication number
CN110198475B
CN110198475B (application CN201811330917.2A)
Authority
CN
China
Prior art keywords
video
image frame
still image
image
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811330917.2A
Other languages
Chinese (zh)
Other versions
CN110198475A (en)
Inventor
李祎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd, Tencent Cloud Computing Beijing Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201811330917.2A priority Critical patent/CN110198475B/en
Publication of CN110198475A publication Critical patent/CN110198475A/en
Application granted granted Critical
Publication of CN110198475B publication Critical patent/CN110198475B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The disclosure provides a video processing method, apparatus, device, and computer-readable storage medium. The video processing method comprises the following steps: a first device detects still images in a video, a still image being an image frame whose difference from a reference image frame is smaller than a predetermined threshold; the first device removes the detected still images from the video; and the first device transmits the video with the still images removed to a second device for storage, the first device and the second device being connected via the internet. Through the embodiments of the disclosure, the size of the video data can be reduced, the upload bandwidth occupied can be lowered, cloud storage space can be saved, and a viewer's time can be saved.

Description

Video processing method, device, equipment, server and readable storage medium
Technical Field
The present disclosure relates to the field of storage technologies, and in particular, to a video processing method, apparatus, device, and computer-readable storage medium.
Background
In a traditional security video system, video captured by a camera is processed by an NVR (Network Video Recorder) or DVR (Digital Video Recorder) device and then stored on a disk of a local server. Unlike a traditional analog video recorder, a DVR records onto a hard disk and is therefore often called a hard disk recorder. It is a computer system for image computation and storage, providing long-duration video and audio recording, remote monitoring, and control of image/voice and motion-triggered capture. The main function of an NVR is to receive, store, and manage the digital video streams transmitted over the network by IPC (network camera) devices, realizing the advantages of the distributed architecture that networking brings. In short, through an NVR, multiple webcams can be viewed, browsed, played back, managed, and stored simultaneously.
With the advent of the cloud computing era, storing these security video files in cloud storage is becoming more and more common. There are two main techniques for uploading a video file from a local storage device to cloud storage: the first synchronizes the video file from the local storage device to the cloud storage device through a locally deployed cloud storage gateway; the second uses an upload script or application in the local environment to call a network interface of the cloud storage device, thereby uploading the video file.
In the security industry, cameras are mainly used to capture and record the people, vehicles, and moving objects around the scene when an event occurs, serving as a basis for event analysis and retrospection. In both existing schemes for uploading local security video to cloud storage, the security video files occupy considerable upload bandwidth and storage space.
Disclosure of Invention
An object of the present disclosure is to provide a video processing method, apparatus, device and computer-readable storage medium for cloud storage.
According to a first aspect of an embodiment of the present disclosure, a video processing method is disclosed, which includes:
a first device detecting a still image in a video, wherein the still image is an image frame whose difference from a reference image frame is smaller than a predetermined threshold;
the first device removing the detected still image from the video;
and the first device transmitting the video with the still image removed to a second device for storage, wherein the first device and the second device are connected via the internet.
According to a second aspect of the embodiments of the present disclosure, there is disclosed a video processing apparatus comprising:
a still image detection module configured to: detecting a still image in a video, wherein the still image is an image frame of which the difference value with a reference image frame is smaller than a preset threshold value;
a still image removal module configured to: removing the detected still image from the video;
a transfer module configured to: transmit the video with the still image removed to a second device for storage, wherein the video processing apparatus is connected with the second device via the internet.
According to a third aspect of the embodiments of the present disclosure, a storage server is disclosed, which comprises the embodiments of the video processing apparatus as described above.
According to a fourth aspect of the embodiments of the present disclosure, a machine device is disclosed, comprising a processor and a memory, the memory having stored thereon computer-readable instructions, which, when executed by the processor, implement the method of the embodiments as described above.
According to a fifth aspect of embodiments of the present disclosure, a computer-readable storage medium is disclosed, on which a computer program is stored, which, when executed by a processor, implements the method of the embodiments described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in one or more of the embodiments of the present disclosure, when a video file is stored from one device (a first device) to another device (a second device) via the internet, "still images" in the video are detected and removed, and only a portion of the video that satisfies a condition is uploaded to the second device, thereby reducing the physical size of the uploaded video, reducing the consumption of bandwidth and storage space for uploading the video file while satisfying the security service requirements, and saving the time of a video viewer.
The above as well as additional features and advantages of the present disclosure will become apparent in the following detailed description, or may be learned by the practice of the present disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become apparent from the detailed description of exemplary embodiments thereof with reference to the accompanying drawings. The accompanying drawings of the present disclosure are incorporated in and constitute a part of this specification. The drawings illustrate embodiments suitable for the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a system architecture diagram of a video storage system to which the present disclosure relates, according to an example embodiment of the present disclosure.
Fig. 2 shows a schematic flow diagram of a video processing method according to an exemplary embodiment of the present disclosure.
Fig. 3 shows a schematic flow chart of an exemplary specific implementation of step S210 of the embodiment of the video processing method shown in fig. 2.
Fig. 4 shows a schematic flow diagram of a specific implementation of a video processing method according to an exemplary embodiment of the present disclosure, in which an image frame is saved as a difference value with a reference image frame.
Fig. 5 shows a schematic flow chart of a video processing method implemented at a local end according to an exemplary embodiment of the present disclosure.
Fig. 6 shows a schematic flow chart of a video processing method implemented in the cloud according to an exemplary embodiment of the present disclosure.
Fig. 7 shows a schematic block diagram of a video processing apparatus according to an exemplary embodiment of the present disclosure.
FIG. 8 shows a schematic block diagram of a machine device according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments of the present disclosure will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 illustrates a schematic system architecture of a video storage system according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the video storage system 100 may include one or more cameras 110 (two are shown in fig. 1 as an example) for capturing video, a first server 120, a first storage device 130, a second storage server 140, and a second storage device 150. The camera 110 may be, for example, a security camera that captures video images of a certain area. The first server 120 may be a local storage server, NVR, DVR, or the like, configured to receive the videos captured by the one or more cameras 110, manage them, and store them on the first storage device 130. The second storage server 140 receives the uploaded video file and stores it on the second storage device 150. In one example, the camera 110, the first server 120, and the first storage device 130 are located at the local end, and the second storage server 140 and the second storage device 150 are located at the cloud end; the local end and the cloud end may be connected via the internet, and the second storage server 140 and the second storage device 150 may also be connected via the internet. Here, "local" and "cloud" are relative to the video to be stored (or the device holding it): a device or location that obtains the video without going through the internet (e.g., via a direct connection) is referred to as "local", and a device or location that obtains the video through the internet is referred to as "cloud". Although the video storage system 100 is shown in fig. 1 as including the first server 120, the first storage device 130, and the second storage server 140 by way of example, it should be understood that the video storage system 100 may omit one or more of these components; for example, the captured video may be uploaded directly to the second storage device 150 by the camera 110.
In fig. 1, nine locations are marked: 1 - the camera 110; 2 - from the camera 110 to the first server 120; 3 - the first server 120; 4 - from the first server 120 to the first storage device 130; 5 - the first storage device 130; 6 - from the first storage device 130 to the second storage server 140; 7 - the second storage server 140; 8 - from the second storage server 140 to the second storage device 150; 9 - the second storage device 150.
In practical applications, the camera 110 usually records continuously, 7 × 24 hours, and the recorded video often contains stretches with no moving object in the frame; such an image is usually called a "still image". For example, in the early morning, when there are no pedestrians or other moving objects on the road, the camera records nothing but "still images". The inventors of the present application realized that still images generally have no reference value, and that a 7 × 24 hour security video file is very large because of the large number of "still images" it contains, resulting in:
1. when video files containing "still images" are transmitted over the internet or private lines, the network bandwidth of the user is wasted.
2. Saving files containing "still images" in a storage device for long periods of time wastes the user's cloud storage resources.
3. Scrubbing through long stretches of historical video only to view "still images" wastes the user's bandwidth and time.
The video processing method, apparatus, device, and computer-readable storage medium according to the embodiments of the disclosure reduce both the bandwidth occupied while uploading a video to a storage device via the internet and the space the video occupies on the storage device (e.g., a local or cloud storage device) by removing the still images in the video. They may be implemented at any one of the 9 locations shown in fig. 1; in other words, at any location where the video-generating apparatus is connected to the storage device via the internet. The video processing method according to embodiments of the present disclosure may be performed by a machine device at any one of these 9 locations, and the video processing apparatus or device may be implemented as a machine device at any one of them, or as any one of the camera 110, the first server 120, the first storage device 130, the second storage server 140, and the second storage device 150. The machine device may be any machine device or module capable of detecting and removing still images in a video and storing the resulting video to a storage device via the internet.
Fig. 2 shows a schematic flow diagram of a video processing method according to an exemplary embodiment of the present disclosure, which may be performed by a first device, wherein the processed video is sent by the first device to a second device via the internet. The first device and the second device may be any two devices connected via the internet, for example, the first device may be located at any one of the positions 1-9 shown in fig. 1 (for example, the first device may be the camera 110, the first server 120, or the first storage device 130 at the local end, or may be the second storage server 140 or the second storage device 150 at the cloud end), and the second device may be the second storage server 140 or the second storage device 150 shown in fig. 1. It should be understood that the first device is not limited to a local or cloud device and the second device is not limited to a cloud device. As shown in fig. 2, the exemplary video processing method may include the steps of:
s210, the first device detects a still image in the video.
In the prior art, when a video file is stored to a storage device (such as the cloud) via the internet, still images and non-still images in the video are generally not distinguished; instead, the entire video file is stored as a whole. In many situations (for example, the security scenario described above), the still images in a video carry little useful information and are usually skipped over when the video is reviewed, so removing them from the video before storing it not only saves upload bandwidth and storage space but also saves the reviewer's time. In the embodiments of the present disclosure, the still images in the video are identified and removed to reduce the occupied bandwidth and storage space and to save the viewer's time. A "still image" is an image frame whose difference from a reference image frame is smaller than a predetermined threshold.
In one example, still images in a video may be detected by a motion detection algorithm. Motion detection (also called movement detection) is commonly used for unattended surveillance video and automatic alarms. Images acquired by the camera at a given frame rate are compared by the CPU according to a certain algorithm; when the picture changes (e.g., a person walks by or the lens is moved), the comparison result exceeds a threshold and the system is instructed to respond automatically (e.g., capture a snapshot or raise an alarm).
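The frame comparison just described can be sketched in a few lines. The grayscale-frame representation and the threshold value below are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

def motion_detected(frame, prev_frame, threshold=10.0):
    """A movement detection event occurs when the mean absolute pixel
    difference between two grayscale frames exceeds the threshold."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return bool(diff.mean() > threshold)

still = np.zeros((120, 160), dtype=np.uint8)      # empty scene
moving = still.copy()
moving[40:80, 60:100] = 255                       # simulated object in frame
print(motion_detected(still, still))              # → False
print(motion_detected(moving, still))             # → True
```

In a real system the threshold would be tuned per camera; pixel noise alone can otherwise trigger spurious events.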
In the embodiments of the present disclosure, the inventors of the present application creatively apply a motion detection algorithm to a captured video, and identify a still image in the video by applying the motion detection algorithm to a frame in the video to determine whether a motion detection event occurs. Fig. 3 shows a schematic flow chart of a specific implementation of identifying a still image in a video (step S210) according to an exemplary embodiment of the present disclosure, the execution subject of the method steps being the first device, which is omitted in the following description for the sake of brevity. In this example, as shown in fig. 3, step S210 may include the steps of:
s310, judging whether the image frame of the video has a movement detection event or not according to a movement detection algorithm.
Motion detection algorithms generally include background subtraction, temporal (frame) difference, optical flow, and motion vector detection, any of which may be applied in the embodiments of the present disclosure.
Through the motion detection algorithm, whether a movement detection event occurs in an image frame of the video can be judged. A "movement detection event" means that the movement of an object is detected in an image frame by the algorithm; such a frame is said to have a movement detection event, while a frame in which no object movement is detected is said to have no movement detection event.
In one example, each image frame of the video may be checked frame by frame. In another example, image frames are checked at intervals of a predetermined number of frames (or a predetermined time): every predetermined number of frames (or amount of time), one frame is checked for a movement detection event, and if none occurs, all image frames between the two checks are regarded as having no movement detection event. Conversely, if a movement detection event is found at a check, all image frames between the two checks may be regarded as having a movement detection event, or all of them may be checked at a finer granularity (e.g., frame by frame).
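The interval-sampling strategy above can be sketched as follows; the probing stride, the threshold, and the choice to compare probes against a fixed reference frame are assumptions made for illustration:

```python
import numpy as np

def classify_interval(frames, reference, interval=5, threshold=10.0):
    """Probe every `interval`-th frame against the reference frame and
    propagate the probe's verdict to all frames up to the next probe:
    True = movement detection event, False = still image."""
    moving = [False] * len(frames)
    for start in range(0, len(frames), interval):
        diff = np.abs(frames[start].astype(np.int16)
                      - reference.astype(np.int16))
        if diff.mean() > threshold:
            for j in range(start, min(start + interval, len(frames))):
                moving[j] = True
    return moving

reference = np.zeros((60, 80), dtype=np.uint8)
blob = reference.copy()
blob[10:30, 20:50] = 255
frames = [reference] * 5 + [blob] * 5       # motion begins at frame 5
print(classify_interval(frames, reference))  # first 5 False, last 5 True
```

The trade-off is the one the text notes: a larger interval costs less CPU but risks misclassifying short bursts of motion that fall entirely between probes.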
S320, determining the image frame without the motion detection event as a still image.
For an image frame in which no movement detection event occurs, no object movement appears in the frame, so such a frame is determined to be a still image. An image frame in which a movement detection event occurs is determined to be a non-still image.
Through step S210 or steps S310-S320, it can be determined which image frames of the video are still images.
Referring back to fig. 2, the example method then advances to step S220.
S220, the first device removes the detected still image from the video.
In step S220, the first device removes the image frames determined to be still images from the video so that the video contains only non-still images, i.e., images containing useful information. Thereafter, the example method proceeds to step S230.
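Step S220 amounts to filtering the frame list. A minimal sketch, with the frame representation and threshold carried over as assumptions from the detection step:

```python
import numpy as np

def remove_still_images(frames, reference, threshold=10.0):
    """Drop every frame whose mean absolute difference from the reference
    frame falls below the threshold, keeping only non-still images."""
    return [f for f in frames
            if np.abs(f.astype(np.int16)
                      - reference.astype(np.int16)).mean() >= threshold]

reference = np.zeros((60, 80), dtype=np.uint8)
busy = reference.copy()
busy[0:20, 0:40] = 200                              # frame with activity
video = [reference, busy, reference, busy]
print(len(remove_still_images(video, reference)))   # → 2
```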
S230, the first device transmits the video from which the still image is removed to the second device via the internet.
In step S230, the first device transmits the video from which the still image is removed to the second device to store the video. For example, a local device uploads a video from a local end to a cloud end, or a cloud storage server stores the video in the cloud end to a cloud storage device via the internet, and so on.
Compared with the original video, the size of the video without the still image is greatly reduced, so that the occupied bandwidth is reduced when the video is transmitted to the second device through the Internet, the occupied storage space in the storage device is also reduced, and the time for a viewer to look up the video is also saved.
In one example, the first device may also perform other processing on the video after removing the still image before transmitting the video to the second device. In one example, for each image frame of the video, only the difference data between the image frame and the reference image frame may be stored to further reduce the video size, save bandwidth resources, and save review time. Fig. 4 shows a schematic flow diagram of a video processing method according to an exemplary embodiment of the present disclosure, the execution subject of which may be the first device, which is omitted from the following description for the sake of brevity. In this embodiment, for each image frame, the first device only stores difference data between the image frame and the reference image frame. As shown in fig. 4, in this embodiment, an example video processing method may include the steps of:
s401, for each image frame of the video, a difference between the image frame and a reference image frame is calculated.
The reference image frame may be, for example, a background image for the image frames of the video, such as a still shot of the camera's target area containing no moving objects, or an average background frame computed from the frames of the video. There may be one or more reference image frames, and the appropriate one may be selected according to the lighting and/or time of day of the target image frame.
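One simple way to obtain the "average background image frame" mentioned above is a per-pixel mean over a sample of frames; this particular estimator is an illustrative assumption, not the disclosure's prescribed method:

```python
import numpy as np

def average_background(frames):
    """Estimate a reference frame as the per-pixel mean of sampled frames;
    transient objects average out, leaving the static background."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)

dark = np.full((4, 4), 10, dtype=np.uint8)
bright = np.full((4, 4), 30, dtype=np.uint8)
print(average_background([dark, bright])[0, 0])   # → 20
```

Maintaining separate averages for different lighting conditions (day/night) would give the multiple reference frames the paragraph describes.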
S402, judging whether the difference value is larger than a preset threshold value.
The magnitude of the difference between an image frame and the reference image frame indicates how much the frame departs from the reference: the larger the difference, the more likely a new object has entered the picture. Therefore, in the embodiment of fig. 4, a predetermined threshold may be set in advance as the comparison criterion, based on experience, experimental results, statistics, and the like.
And S403, determining the image frame as a still image under the condition that the difference value is less than a preset threshold value.
When it is determined in step S402 that the difference is smaller than the predetermined threshold, it is determined in step S403 that no movement detection event has occurred in the image frame, i.e., the frame is determined to be a still image, and the method proceeds to step S404. Otherwise, the method proceeds to steps S405 and S406.
S404, the image frame determined as the still image is removed from the video.
In step S404, the still image is removed from the video to save bandwidth and storage space.
S405, in the case where the difference is greater than or equal to a predetermined threshold, determining the image frame as a non-still image.
When it is determined in step S402 that the difference is not less than (i.e., greater than or equal to) the predetermined threshold, it is determined in step S405 that a movement detection event has occurred in the image frame, i.e., the image frame is determined as a non-still image, the step proceeds to S406.
S406, the difference between the image frame determined to be a non-still image and the reference image frame is saved as data of the image frame.
In step S406, a non-still image frame is saved as the difference between it and the reference image frame, so that the data size of the video is further reduced.
S407, transmitting the difference value of the reference image frame and each image frame determined to be a non-still image to the second device via the internet as data of a video.
In step S407, the data of the entire video is compressed to contain only the non-still images, and only the reference image frames and the difference between each non-still frame and its corresponding reference frame are saved, so that the video data becomes smaller still, further saving bandwidth and storage space. When such stored video data is to be reviewed, each image frame can be restored from the reference image frame and the stored difference, and the restored frames assembled into video for playback.
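Steps S401–S407 can be sketched as a compress/restore pair. The signed-difference encoding and threshold below are illustrative assumptions rather than this disclosure's exact on-disk representation:

```python
import numpy as np

def compress_video(frames, reference, threshold=10.0):
    """Keep only the reference frame plus the signed difference of each
    non-still frame; still frames are dropped entirely (S401-S407)."""
    diffs = [f.astype(np.int16) - reference.astype(np.int16)
             for f in frames
             if np.abs(f.astype(np.int16)
                       - reference.astype(np.int16)).mean() >= threshold]
    return reference, diffs

def restore_video(reference, diffs):
    """Rebuild each non-still frame from the reference and its stored diff."""
    return [np.clip(reference.astype(np.int16) + d, 0, 255).astype(np.uint8)
            for d in diffs]

reference = np.zeros((60, 80), dtype=np.uint8)
car = reference.copy()
car[20:40, 10:60] = 180                        # a vehicle enters the frame
ref, diffs = compress_video([reference, car, reference], reference)
restored = restore_video(ref, diffs)
print(len(diffs))                              # → 1 (still frames dropped)
print(np.array_equal(restored[0], car))        # → True (lossless round trip)
```

Because most of a security scene is static, each diff is also highly compressible, which is where the additional savings the paragraph mentions would come from.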
As described above, the video processing method according to the embodiments of the present disclosure may be implemented at various locations of the local end or the cloud end. Figs. 5 and 6 show schematic flowcharts of specific implementations according to exemplary embodiments: fig. 5 shows an embodiment implemented at the local end (i.e., the first device is at the local end), and fig. 6 shows an embodiment implemented at the cloud end (i.e., the first device is at the cloud end). For brevity, the execution subject is omitted in the following description.
As shown in fig. 5, in this embodiment, the exemplary video processing method executed by the first device located at the local end may include the steps of:
S501, the video to be transmitted to the second device is saved in a first buffer area of the local end.
In one example, the first buffer is a FIFO (first in, first out) buffer. In the example of fig. 5, the video is first saved in the buffer rather than the still images being detected directly, because the uploading speed of the video is generally greater than the processing speed of the computer at the local end (e.g., the speed of detecting still images), so the received video data needs to be buffered first. It should be understood that when the uploading speed of the video is comparable to or less than the processing speed of the computer, the still images may be detected directly, or otherwise processed, without buffering.
S502, reading the video in the first buffer area, and detecting the still image in the video.
In step S502, whether a motion detection event occurs in an image frame of the video may be detected through various motion detection algorithms as described above, so as to determine a still image in the video, which is not described herein again.
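As an illustrative sketch only (the disclosure does not mandate a particular motion detection algorithm), the difference-versus-threshold check described above could be expressed as follows; the mean absolute difference metric, the flat pixel-list representation of a frame, and the default threshold of 8.0 are assumptions chosen for illustration:

```python
def is_still_image(frame, reference, threshold=8.0):
    """Classify a frame as a still image if its mean absolute
    difference from the reference frame is below the threshold."""
    if len(frame) != len(reference):
        raise ValueError("frame and reference must have the same size")
    diff = sum(abs(a - b) for a, b in zip(frame, reference)) / len(frame)
    return diff < threshold
```

A frame whose pixels barely differ from the reference is classified as still and can be dropped; a frame at or above the threshold corresponds to a motion detection event.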
S503, the detected still image is removed from the video.
S504, the video with the still image removed is stored in a second buffer area of the local end.
In one example, the second buffer is a FIFO buffer. In the example of fig. 5, the video with the still images removed is first saved to the buffer rather than transmitted directly to the second device (e.g., uploaded to the cloud); one reason for doing so is to absorb the difference between the processing speed of the computer (e.g., the speed at which still images are detected) and the speed at which the video data is uploaded to the second device. It should be understood that, when the uploading speed of the video to the second device is equal to or higher than the processing speed of the computer, the video processed in step S503 may be transmitted directly to the second device via the internet, or otherwise processed, without being buffered.
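The rationale above, a buffer absorbing the gap between processing speed and upload speed, can be sketched with a thread-safe FIFO queue; the thread structure, the queue size, and the `upload` callback are illustrative assumptions rather than part of the disclosed method:

```python
import queue
import threading

def run_pipeline(processed_frames, upload, maxsize=64):
    """A FIFO queue decouples the processing side (producer) from a
    possibly slower upload path (consumer), as with the second buffer."""
    buffer = queue.Queue(maxsize=maxsize)  # second buffer (FIFO)
    done = object()                        # sentinel marking end of stream

    def producer():
        for frame in processed_frames:
            buffer.put(frame)              # blocks if the consumer lags behind
        buffer.put(done)

    def consumer():
        while True:
            frame = buffer.get()
            if frame is done:
                break
            upload(frame)                  # transmit via the internet

    t = threading.Thread(target=producer)
    t.start()
    consumer()
    t.join()
```

The bounded `maxsize` also gives natural backpressure: when uploading is slower, the producer blocks instead of consuming unbounded memory.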
S505, transmitting the video stored in the second buffer to the second device via the internet.
Removing the still images from the video at the local end before uploading it via the internet to the second device (e.g., a cloud device) reduces the size of the uploaded video, saves upload bandwidth, reduces the storage space occupied by the video, and saves the video viewer's time.
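The local-end flow of steps S501–S505 can be sketched as a sequential pipeline with two FIFO buffers; the mean-absolute-difference still check, the threshold value, and the `upload` callback are illustrative assumptions:

```python
from collections import deque

def process_locally(frames, reference, threshold, upload):
    """Sketch of steps S501-S505: buffer incoming frames (first FIFO),
    drop still frames, buffer the result (second FIFO), then upload."""
    first_buffer = deque(frames)            # S501: first buffer (FIFO)
    second_buffer = deque()
    while first_buffer:                     # S502: read and detect
        frame = first_buffer.popleft()
        diff = sum(abs(a - b) for a, b in zip(frame, reference)) / len(frame)
        if diff >= threshold:               # S503: keep only non-still frames
            second_buffer.append(frame)     # S504: second buffer (FIFO)
    while second_buffer:                    # S505: transmit via the internet
        upload(second_buffer.popleft())
```

In a real deployment the two loops would run concurrently (ingest, detection, and upload at different speeds); here they are serialized for clarity.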
Fig. 6 shows another embodiment in which the still image removal operation is implemented in the cloud. As shown in fig. 6, in this embodiment, an example video processing method performed by a first device in the cloud may include the steps of:
S601, receiving a video to be transmitted to the second device through the internet.
S602, detecting a still image in the received video.
In the example shown in fig. 6, the video uploaded by the local end is detected directly, rather than first buffering the received video as in fig. 5. This is because, in general, the speed at which the cloud receives video is less than the processing speed of the computer (e.g., the speed of detecting still images), so a buffer may not be required. It should be understood that when the speed of uploading from the local end to the cloud is too high, a video buffer may be provided at the cloud.
S603, the detected still image is removed from the video.
And S604, storing the video without the still image to a third buffer area at the cloud end.
In one example, the third buffer is a FIFO buffer. In fig. 6, the video with the still images removed is first saved in the buffer and then read out to the cloud storage device in step S605. It should be understood that the video processed in step S603 may also be stored directly in the cloud storage device.
S605, the video stored in the third buffer is transmitted to the second device via the internet.
With the embodiment of fig. 6, the still image may be removed from the video at the cloud and then the video may be transmitted to a second device (e.g., a cloud storage device) via the internet, thereby saving transmission bandwidth and storage space.
In both the examples of fig. 5 and fig. 6, still image detection is performed directly on the video. In a different example, the motion detection compression function (i.e., the function of detecting still images in a video using motion detection technology) can be turned on or off. In that case, whether at the local end or the cloud end, after a video is received and before still images in it are detected, it is first determined whether the function is turned on: if the function is set to on, still image detection is performed and the still images are removed (steps S502 and S503, or S602 and S603); otherwise, these steps are not performed.
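The on/off behaviour described above can be sketched as a simple guard before detection; the configuration key `motion_detection_compression` and the mean-absolute-difference metric are assumptions for illustration:

```python
def maybe_remove_still_images(frames, reference, threshold, config):
    """Run still-image detection and removal only when the motion
    detection compression function is enabled in the configuration."""
    if not config.get("motion_detection_compression", False):
        return list(frames)  # function is off: pass the video through unchanged
    # Function is on: keep only non-still frames (steps S502/S503 or S602/S603).
    return [f for f in frames
            if sum(abs(a - b) for a, b in zip(f, reference)) / len(f) >= threshold]
```

With the flag off, the video is forwarded as-is; with it on, still frames below the threshold are dropped before transmission.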
Through the above embodiments of the video processing method, the still images in the video can be removed before the video is transmitted from the first device to the second device via the internet, so that the size of the video data is reduced, uploading bandwidth is saved, occupation of cloud storage space is reduced, and the viewing time of a video viewer is saved. In some embodiments, the size of the video data may be further compressed by saving the reference image frame data and the difference of each non-still image frame from the corresponding reference image frame.
According to still another aspect of the present disclosure, there is also provided a video processing apparatus. The apparatus may perform the embodiments of the video processing method as described above, which may be implemented in a machine device at any one of the 9 locations as shown in fig. 1, or may be implemented as other apparatus connected with the machine device. Fig. 7 shows a schematic block diagram of the components of such an apparatus according to an exemplary embodiment of the present disclosure. As shown in the embodiment of fig. 7, the example video processing device 701 may include:
a still image detection module 710 configured to: detect a still image in a video, wherein the still image is an image frame whose difference value from a reference image frame is smaller than a preset threshold value;
a still image removal module 720 configured to: remove the detected still image from the video;
a transfer module 730 configured to: transmit the video with the still image removed to a second device to store the video, wherein the video processing device is connected to the second device through the internet.
In the embodiment shown in fig. 7, the still image detection module 710 may further include:
a movement detection unit 711 configured to: judge whether a motion detection event occurs in an image frame of the video according to a motion detection algorithm;
a determining unit 712 configured to: determine image frames in which no motion detection event occurs as still images.
In the embodiment shown in fig. 7, the determining unit 712 may be further configured to: determining the image frame as a non-still image in a case where the difference is greater than or equal to a predetermined threshold,
the video processing apparatus 701 further comprises a compression module 740 configured to: save the difference between the image frame determined to be a non-still image and the reference image frame as data of the image frame.
In the embodiment shown in fig. 7, the compression module 740 may be further configured to:
transmit the difference value of the reference image frame and each image frame determined to be a non-still image to a cloud storage device as data of the video.
According to still another aspect of the present disclosure, there is also provided a storage server including the embodiments of the video processing apparatus as described above. The storage server may perform embodiments of the video processing method as described above, which may be implemented at location 3 or 7 as shown in fig. 1.
According to still another aspect of the present disclosure, there is also provided a cloud storage device including the embodiments of the video processing apparatus as described above. The cloud storage device may perform embodiments of the video processing method as described above, which may be implemented at location 9 as shown in fig. 1.
For the implementation processes and relevant details of the functions and effects of each unit/module in the above apparatus, reference may be made to the implementation processes of the corresponding steps in the above method embodiments, which are not repeated here.
The apparatus embodiments in the above embodiments may be implemented by hardware, software, firmware or a combination thereof, and may be implemented as a single apparatus, or may be implemented as a logic integrated system in which constituent units/modules are dispersed in one or more computing devices and each performs a corresponding function.
The units/modules constituting the apparatus in the above embodiments are divided according to logical functions; they may be re-divided according to logical functions, for example, the apparatus may be implemented with more or fewer units/modules. These constituent units/modules may be implemented by hardware, software, firmware, or a combination thereof; they may be separate independent components, or may be integrated units/modules combining multiple components to perform corresponding logical functions. The hardware, software, firmware, or combination thereof may include: separate hardware components, functional blocks implemented through programming, functional blocks implemented through programmable logic devices, etc., or a combination thereof.
According to an exemplary embodiment, the apparatus may be implemented as a machine device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the machine device to perform any one of the method embodiments as described above, or the computer program, when executed by the processor, causes the machine device to perform the functions as implemented by the constituent units/modules of the apparatus embodiments as described above.
The processor described in the above embodiments may refer to a single processing unit, such as a central processing unit CPU, or may be a distributed processor system comprising a plurality of distributed processing units/processors.
The memory described in the above embodiments may include one or more memories, which may be internal memories of the computing device, such as various memories of a transient or non-transient type, or external storage devices connected to the computing device through a memory interface.
Fig. 8 shows a schematic block diagram of an exemplary embodiment of such a machine device 801. As shown in fig. 8, the machine device may include, but is not limited to: at least one processing unit 810, at least one memory unit 820, and a bus 830 that couples the various system components including the memory unit 820 and the processing unit 810.
The memory unit stores program code that may be executed by the processing unit 810 to cause the processing unit 810 to perform steps according to various exemplary embodiments of the present disclosure described in the description part of the above exemplary methods of the present specification. For example, the processing unit 810 can execute the steps shown in the flowcharts in the figures of the specification.
The storage unit 820 may include readable media in the form of volatile storage units, such as a random access storage unit (RAM) 821 and/or a cache storage unit 822, and may further include a read-only storage unit (ROM) 823.
Storage unit 820 may also include a program/utility 824 having a set (at least one) of program modules 825, such program modules 825 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 830 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The machine device may also communicate with one or more external devices 870 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the machine device, and/or with any devices (e.g., router, modem, etc.) that enable the machine device to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 850. Also, the machine device may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via network adapter 860. As shown, network adapter 860 communicates with the other modules of the machine through bus 830. It should be understood that although not shown in the figures, the machine device may be implemented using other hardware and/or software modules, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present disclosure, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (12)

1. A video processing method, comprising:
the method comprises the steps that a first device detects a still image in a video, wherein the still image is an image frame of which the difference value with a reference image frame is smaller than a preset threshold value; the reference image frames comprise a plurality of reference image frames corresponding to different selections of lighting and/or time for the image frames of the video;
the first device removing the detected still image from the video;
the method comprises the steps that a first device transmits a video with a still image removed to a second device to store the video, wherein the first device and the second device are connected through the Internet;
the detecting a still image in a video includes:
for each image frame of the video, calculating a difference between the image frame and the reference image frame;
judging whether the difference value is larger than a preset threshold value or not;
determining the image frame as a still image in a case where the difference is less than a predetermined threshold;
determining the image frame as a non-still image in a case where the difference is greater than or equal to a predetermined threshold; storing the difference between the image frame determined to be a non-still image and a reference image frame as data of the image frame;
the first device transmitting the still image-removed video to the second device includes:
the first device transmits the difference value of the reference image frame and each image frame determined to be a non-still image to the second device as data of a video.
2. The method of claim 1, wherein the first device detecting the still image in the video comprises:
the first device judges, according to a motion detection algorithm, whether a motion detection event occurs in an image frame of the video;
the first device determines image frames in which no motion detection event occurs as still images.
3. The method of claim 1 or 2, further comprising:
the first device saves the video to be transmitted to the second device to a first buffer,
wherein:
the first device calculates, for each image frame of the video, a difference between the image frame and a reference image frame, and determining whether the difference is greater than a predetermined threshold includes:
the first device reads the video in the first buffer and, for each image frame of the video, calculates a difference between the image frame and a reference image frame, determines whether the difference is greater than a predetermined threshold,
the first device transmitting the difference value of the reference image frame and each image frame determined to be a non-still image to a second device as data of a video includes:
the first device stores the video with the still image removed into a second buffer;
the first device transmits the video stored in the second buffer to the second device via the internet.
4. The method of claim 1 or 2, further comprising:
a first device receives video over the internet for transmission to a second device,
wherein:
the first device transmitting the difference value of the reference image frame and each image frame determined to be a non-still image to a second device as data of a video includes:
the first device stores the video with the still image removed into a third buffer;
the first device transmits the video stored in the third buffer to the second device via the internet.
5. A video processing apparatus, comprising:
a still image detection module comprising a movement detection unit and a determination unit, the movement detection unit configured to: calculate a difference value between each image frame of the video and a reference image frame, and judge whether the difference value is greater than a preset threshold value; the determination unit is configured to: determine the image frame as a still image in a case where the difference is less than the predetermined threshold; the reference image frames comprise a plurality of reference image frames corresponding to different selections of lighting and/or time for the image frames of the video;
the determining unit is further configured to: determining the image frame as a non-still image in a case where the difference is greater than or equal to a predetermined threshold,
a compression module configured to: storing the difference between the image frame determined to be a non-still image and a reference image frame as data of the image frame;
a still image removal module configured to: removing the detected still image from the video;
a transfer module configured to: transmitting the difference value of the reference image frame and each image frame determined to be a non-still image to a second device as data of a video to store the video, wherein the video processing apparatus is connected to the second device through the internet.
6. The apparatus of claim 5, wherein the still image detection module comprises:
a movement detection unit configured to: judge whether a motion detection event occurs in an image frame of the video according to a motion detection algorithm;
a determination unit configured to: determine image frames in which no motion detection event occurs as still images.
7. The apparatus of claim 5, wherein the video processing apparatus is configured to: saving video to be transmitted to the second device to the first buffer;
the movement detection unit is further configured to: reading the video in the first buffer area, calculating a difference value between each image frame of the video and a reference image frame, and judging whether the difference value is greater than a preset threshold value;
the transfer module is further configured to: and storing the video without the still image in a second buffer area, and transmitting the video stored in the second buffer area to the second equipment through the Internet.
8. The apparatus of claim 5, wherein the video processing apparatus is configured to: receiving a video to be transmitted to a second device through the internet;
the transfer module is further configured to: and storing the video without the still image in a third buffer area, and transmitting the video stored in the third buffer area to the second equipment through the Internet.
9. A cloud storage device comprising the video processing apparatus of any one of claims 5 to 8.
10. A storage server, comprising the video processing apparatus of any one of claims 5 to 8.
11. A machine device comprising a processor and a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method of any of claims 1-4.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 4.
CN201811330917.2A 2018-11-09 2018-11-09 Video processing method, device, equipment, server and readable storage medium Active CN110198475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811330917.2A CN110198475B (en) 2018-11-09 2018-11-09 Video processing method, device, equipment, server and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811330917.2A CN110198475B (en) 2018-11-09 2018-11-09 Video processing method, device, equipment, server and readable storage medium

Publications (2)

Publication Number Publication Date
CN110198475A CN110198475A (en) 2019-09-03
CN110198475B true CN110198475B (en) 2022-02-25

Family

ID=67751346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811330917.2A Active CN110198475B (en) 2018-11-09 2018-11-09 Video processing method, device, equipment, server and readable storage medium

Country Status (1)

Country Link
CN (1) CN110198475B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995558A (en) * 2019-12-17 2021-06-18 Tcl新技术(惠州)有限公司 Video file storage method and device and storage medium
CN113347385A (en) * 2020-03-02 2021-09-03 浙江宇视科技有限公司 Video stream transmission method, device, equipment and medium
CN115379233B (en) * 2022-08-16 2023-07-04 广东省信息网络有限公司 Big data video information analysis method and system
CN115618051B (en) * 2022-12-20 2023-03-21 楠楠聚智信息科技有限责任公司 Internet-based smart campus monitoring video storage method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0951182A1 (en) * 1998-04-14 1999-10-20 THOMSON multimedia S.A. Method for detecting static areas in a sequence of video pictures
CN101951503A (en) * 2009-07-09 2011-01-19 索尼公司 Image receiving apparatus, image receiving method and image transmitting apparatus
WO2017203789A1 (en) * 2016-05-25 2017-11-30 株式会社Nexpoint Difference image generation method, image restoration method, difference detection device, image restoration device, and monitoring method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040252977A1 (en) * 2003-06-16 2004-12-16 Microsoft Corporation Still image extraction from video streams
CN101166267B (en) * 2006-10-20 2010-11-24 鸿富锦精密工业(深圳)有限公司 Video monitoring recording system and method
CN102075735A (en) * 2011-01-14 2011-05-25 深圳职业技术学院 Video monitoring data transmission method and video monitoring terminal
CN103067702B (en) * 2012-12-06 2015-07-22 中通服公众信息产业股份有限公司 Video concentration method used for video with still picture
US10616613B2 (en) * 2014-07-17 2020-04-07 Panasonic Intellectual Property Management Co., Ltd. Recognition data generation device, image recognition device, and recognition data generation method
US9858679B2 (en) * 2014-11-04 2018-01-02 Hewlett-Packard Development Company, L.P. Dynamic face identification
CN105049810A (en) * 2015-08-07 2015-11-11 虎扑(上海)文化传播股份有限公司 Intelligent video monitoring method and video device
CN105049818A (en) * 2015-08-25 2015-11-11 北京丰华联合科技有限公司 Method for optimizing video data transmission
CN105959633A (en) * 2016-05-26 2016-09-21 北京志光伯元科技有限公司 Video transmission method and device
CN106937090A (en) * 2017-04-01 2017-07-07 广东浪潮大数据研究有限公司 The method and device of a kind of video storage
CN108391097A (en) * 2018-04-24 2018-08-10 冼汉生 A kind of video image method for uploading, device and computer storage media

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0951182A1 (en) * 1998-04-14 1999-10-20 THOMSON multimedia S.A. Method for detecting static areas in a sequence of video pictures
CN101951503A (en) * 2009-07-09 2011-01-19 索尼公司 Image receiving apparatus, image receiving method and image transmitting apparatus
WO2017203789A1 (en) * 2016-05-25 2017-11-30 株式会社Nexpoint Difference image generation method, image restoration method, difference detection device, image restoration device, and monitoring method

Also Published As

Publication number Publication date
CN110198475A (en) 2019-09-03

Similar Documents

Publication Publication Date Title
CN110198475B (en) Video processing method, device, equipment, server and readable storage medium
US10123051B2 (en) Video analytics with pre-processing at the source end
US11496671B2 (en) Surveillance video streams with embedded object data
US11810350B2 (en) Processing of surveillance video streams using image classification and object detection
US11343544B2 (en) Selective use of cameras in a distributed surveillance system
US20210409792A1 (en) Distributed surveillance system with distributed video analysis
JP4440863B2 (en) Encoding / decoding device, encoding / decoding method, encoding / decoding integrated circuit, and encoding / decoding program
US11503381B2 (en) Distributed surveillance system with abstracted functional layers
CN109922366B (en) Equipment parameter adjusting method, device, equipment and medium
WO2019218147A1 (en) Method, apparatus and device for transmitting surveillance video
US20170118528A1 (en) System and method for adaptive video streaming
CN112911299B (en) Video code rate control method and device, electronic equipment and storage medium
US20210409817A1 (en) Low latency browser based client interface for a distributed surveillance system
EP3276967A1 (en) Systems and methods for adjusting the frame rate of transmitted video based on the level of motion in the video
US11115619B2 (en) Adaptive storage between multiple cameras in a video recording system
CN109886234B (en) Target detection method, device, system, electronic equipment and storage medium
CN109982017B (en) System and method for intelligent recording of video data streams
CN113613058A (en) Local storage method, equipment and medium for network video stream
Kwon et al. Design and Implementation of Video Management System Using Smart Grouping
US11463739B2 (en) Parameter based load balancing in a distributed surveillance system
US20240022684A1 (en) Dynamic adjustment method and system for adjusting frame per second
US11509832B2 (en) Low light surveillance system with dual video streams
CN111355933B (en) Gstreamer framework timely detection method and server
KR20220029031A (en) Apparatus to store image for cctv and method to store image using the same
US20240007744A1 (en) Audio Sensors for Controlling Surveillance Video Data Capture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant