CN111866443A - Video stream data storage method, device, system and storage medium - Google Patents


Info

Publication number
CN111866443A
Authority
CN
China
Prior art keywords: image, frame image, current frame, deep learning, learning network
Prior art date
Legal status: Pending (assumed; Google has not performed a legal analysis)
Application number
CN201910340860.2A
Other languages
Chinese (zh)
Inventor
黄河 (Huang He)
马琳 (Ma Lin)
Current Assignee
Shenzhen Haiqing Zhiying Technology Co., Ltd.
Original Assignee
Shenzhen Haiqing Zhiying Technology Co., Ltd.
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Haiqing Zhiying Technology Co., Ltd.
Priority claimed from CN201910340860.2A
Publication of CN111866443A
Legal status: Pending

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 — Television systems
    • H04N 7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42 — Coding/decoding characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/423 — Coding/decoding characterised by memory arrangements
    • H04N 5/00 — Details of television systems
    • H04N 5/76 — Television signal recording

Abstract

The application relates to a video stream data storage method, device, system and storage medium. The method comprises the following steps: acquiring video stream data, the video stream data comprising a plurality of frames of images arranged in sequence; extracting image features of a current frame image through a processor; comparing whether the image features of the current frame image are consistent with those of the previous frame image; encoding the current frame image through the processor to obtain encoded data; and if the image features of the current frame image are not consistent with those of the previous frame image, storing the encoded data corresponding to the current frame image into a storage medium. By adopting the method, redundant image information can be deleted, bandwidth transmission cost is reduced, and the utilization rate of the storage medium is improved.

Description

Video stream data storage method, device, system and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method, an apparatus, a system, and a storage medium for storing video stream data.
Background
Based on the requirements of video transmission and video monitoring, video data often needs to be stored. With the development of the internet, video storage technology has been widely applied, especially in the field of video monitoring. Video storage stores the sequentially arranged frames of a video into a storage medium, enabling the storage and later use of the video data.
At present, most video storage technologies first compress the video data with an image coding technique and then store the compressed data. However, image coding alone cannot keep up with ever-higher video definition requirements, and still incurs considerable bandwidth transmission cost and hardware storage cost.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a video stream data storage method, apparatus, system and storage medium.
A method of video stream data storage, the method comprising:
acquiring video stream data; the video stream data comprises a plurality of frames of images which are arranged in sequence;
extracting image characteristics of a current frame image through a processor;
comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image;
encoding the current frame image through the processor to obtain encoded data;
and if the image characteristics of the current frame image are not consistent with those of the previous frame image, storing the coded data corresponding to the current frame image into a storage medium.
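The steps above can be sketched end to end. The sketch below is illustrative only: `extract_features` and `encode` are hypothetical stand-ins (a coarse histogram and run-length coding) for the patent's deep-learning feature extractor and image codec, and frames are modeled as lists of 0-255 pixel values.

```python
def extract_features(frame):
    # Stand-in for the deep-learning feature extractor: a coarse
    # 4-bin intensity histogram over the frame's pixel values.
    hist = [0, 0, 0, 0]
    for px in frame:
        hist[min(px // 64, 3)] += 1
    return tuple(hist)

def encode(frame):
    # Stand-in for a real image codec (JPEG, H.265, ...):
    # simple run-length encoding of the pixel list.
    out, i = [], 0
    while i < len(frame):
        j = i
        while j < len(frame) and frame[j] == frame[i]:
            j += 1
        out.append((frame[i], j - i))
        i = j
    return out

def store_video_stream(frames):
    # The claimed flow: extract features, encode, compare with the
    # previous frame, and store encoded data only when features differ.
    storage, prev_feat = [], None
    for frame in frames:
        feat = extract_features(frame)   # extract image features
        coded = encode(frame)            # encode the current frame
        if feat != prev_feat:            # compare with previous frame
            storage.append(coded)        # store only changed frames
        prev_feat = feat
    return storage
```

Feeding three frames where only the third changes stores two encoded frames, dropping the redundant middle one.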
In one embodiment, the processor comprises a deep learning network controller and an encoder connected in parallel; the step of comparing whether the image characteristics of the current frame image are consistent with the image characteristics of the previous frame image comprises:
comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image;
the storing the encoded data corresponding to the current frame image to the storage medium includes:
generating, by the deep learning network controller, a storage instruction;
and sending the storage instruction to the encoder to instruct the encoder to store the encoded data of the current frame image to a storage medium.
In one embodiment, the processor comprises a deep learning network controller and an encoder connected in series; the step of comparing whether the image characteristics of the current frame image are consistent with the image characteristics of the previous frame image comprises:
comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image;
the encoding the current frame image by the processor to obtain encoded data includes:
generating, by the deep learning network controller, a storage instruction;
and sending the storage instruction to the encoder to instruct the encoder to encode the current frame image to obtain encoded data.
In one embodiment, the training step of the deep learning network model includes:
acquiring a sample video stream and a corresponding known label; the sample video stream comprises a plurality of frames of sample images arranged in sequence;
extracting the characteristics of the sample image through a deep learning network model to be trained to obtain the reference characteristics of the sample image;
determining a loss value between the reference feature and the corresponding known label;
and adjusting model parameters in the deep learning network model according to the loss value until the determined loss value reaches a training stopping condition.
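The training steps above can be sketched as gradient descent on a squared loss between the reference feature's prediction and the known label. The one-parameter model, learning rate, and loss target below are illustrative assumptions, not values from the patent:

```python
def train(samples, lr=0.1, loss_target=1e-4, max_epochs=1000):
    """samples: list of (reference_feature, known_label) pairs.
    Fits a single scale parameter w so that w * feature approximates
    the label, stopping when the mean squared loss reaches the target."""
    w, loss = 0.0, float("inf")
    for _ in range(max_epochs):
        loss, grad = 0.0, 0.0
        for x, y in samples:
            err = w * x - y            # prediction error vs known label
            loss += err * err          # squared-loss term
            grad += 2 * err * x        # gradient of the loss w.r.t. w
        loss /= len(samples)
        if loss <= loss_target:        # training-stop condition reached
            break
        w -= lr * grad / len(samples)  # adjust model parameter by the loss
    return w, loss
```

With samples that are exact multiples, e.g. `[(1.0, 2.0), (2.0, 4.0)]`, the fitted parameter converges toward 2.0 and training stops once the loss is within the target range.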
In one embodiment, the method further comprises:
generating a video feature stream corresponding to the video stream data based on the image features corresponding to each frame of image;
storing the video feature stream to the storage medium.
In one embodiment, the method further comprises:
when the image characteristics of the current frame image are consistent with those of the previous frame image, deleting the coded data corresponding to the current frame image;
and storing the image characteristics of the current frame image to the storage medium.
A video stream data storage system, the system comprising: an optical sensor, a processor, and a storage medium; the optical sensor and the storage medium are respectively connected with the processor;
the optical sensor is used for acquiring video stream data; the video stream data comprises a plurality of frames of images;
the processor is used for extracting the image characteristics of the current frame image; comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image; coding the current frame image to obtain coded data; and if the image characteristics of the current frame image are not consistent with those of the previous frame image, storing the coded data corresponding to the current frame image into the storage medium.
In one embodiment, the processor further comprises a deep learning network controller and an encoder connected in parallel;
the deep learning network controller is used for comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image or not; if the image characteristics of the current frame image are inconsistent with those of the previous frame image, generating a storage instruction through the deep learning network controller; sending the store instruction to the encoder;
and the encoder is used for storing the coded data of the current frame image to a storage medium according to the storage instruction.
In one embodiment, the processor further comprises a deep learning network controller and an encoder connected in series;
the deep learning network controller is used for comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image or not;
the deep learning network controller is further used for generating a storage instruction; sending the store instruction to the encoder;
and the encoder is also used for encoding the current frame image according to the storage instruction to obtain encoded data.
In one embodiment, the system further comprises a memory connected with the processor, the memory storing a deep learning network model to be loaded onto the deep learning network controller for running; the deep learning network model is obtained by training on a sample image set constructed from the multiple frames of sample images in acquired sample video stream data.
In one embodiment, the deep learning network controller is further configured to generate a video feature stream corresponding to the video stream data based on the image features corresponding to each frame of image; storing the video feature stream to the storage medium.
In one embodiment, the encoder is further configured to delete the encoded data corresponding to the current frame image when the image characteristics of the current frame image are consistent with those of the previous frame image; and storing the image characteristics of the current frame image to the storage medium.
A video stream data storage apparatus, the apparatus comprising:
the video stream data acquisition module is used for acquiring video stream data; the video stream data comprises a plurality of frames of images;
the image characteristic extraction module is used for extracting the image characteristics of the current frame image through the processor;
the image characteristic comparison module is used for comparing whether the image characteristic of the current frame image is consistent with the image characteristic of the previous frame image;
the image coding module is used for coding the current frame image through the processor to obtain coded data;
and the image storage module is used for storing the coded data corresponding to the current frame image to a storage medium if the image characteristics of the current frame image are inconsistent with the image characteristics of the previous frame image.
In one embodiment, the image feature comparison module is further configured to compare whether the image features of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image features of the previous frame image; the image storage module is also used for generating a storage instruction through the deep learning network controller; and sending the storage instruction to the encoder to instruct the encoder to store the encoded data of the current frame image to a storage medium.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned video stream data storage method.
In the video stream data storage method, device, system and storage medium above, the acquired video stream data is sent to the processor, which extracts the image features of the current frame image in the video stream data and judges, from those features, whether the current frame image has changed relative to the previous frame image. The processor encodes the current frame image in real time and, when the current frame image differs from the previous frame image, stores the corresponding encoded data into the storage medium; redundant frames are thus not stored, reducing bandwidth transmission cost and improving storage-medium utilization.
Drawings
Fig. 1 is a diagram illustrating an application scenario of a video stream data storage method according to an embodiment;
FIG. 2 is a flow chart illustrating a method for storing video stream data according to an embodiment;
FIG. 3 is a schematic flow chart showing a video stream data storage step in another embodiment;
FIG. 4a is a schematic diagram of a video stream data storage system in one embodiment;
FIG. 4b is a schematic diagram of a video stream data storage system in another embodiment;
FIG. 5 is a block diagram showing the construction of a video stream data storage apparatus according to an embodiment;
fig. 6 is a block diagram showing the construction of a video stream data storage apparatus in another embodiment;
fig. 7 is a block diagram showing the construction of a video stream data storage apparatus in still another embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The video stream data storage method provided by the application can be applied to the application environment shown in fig. 1. The application environment comprises a video stream data storage system including an optical sensor 102, a processor 104, and a storage medium 106. The optical sensor 102 and the storage medium 106 are each coupled to the processor 104. The optical sensor 102 performs photosensitive imaging of a target object according to optical principles and acquires the video stream data; a camera is a typical example. There may be one optical sensor 102 or more than one. The processor 104 may be a single-core or multi-core processor for performing image feature extraction and encoding on each frame of image in the video stream data. The storage medium 106 is used to store video stream data and may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory.
In one embodiment, as shown in fig. 2, a video stream data storage method is provided, which is exemplified by the application of the method to the video stream data storage system in fig. 1, and includes the following steps:
step 202, acquiring video stream data; the video stream data includes a plurality of frames of images in sequence.
A video is composed of multiple frames of images arranged in sequence. A video stream is the transmission form of video data: the data is transmitted as a stable, continuous stream. The video stream data comprises a plurality of frames of images arranged in sequence, transmitted through the video stream in that order; the arrangement order of the images in the video stream data distinguishes the current frame image from the previous frame image.
Specifically, the video stream data may be obtained by shooting in real time by an optical sensor, where the optical sensor is a sensor that performs photosensitive imaging on a target object according to an optical principle. At least one optical sensor is arranged in a required shooting or monitoring area, video stream data is collected through the optical sensor and sent to the processor, and the collected video stream data can also be stored in the optical sensor.
In one embodiment, the video stream data may be collected by the optical sensor in real time or periodically.
In one embodiment, the video stream data may be data pre-stored in a storage medium, the pre-stored video stream data being transmitted to the processor. The transmission mode may be a wireless transmission mode or a wired transmission mode, such as a radio frequency transmission mode, an NFC (near field communication) transmission mode, a bluetooth transmission mode, or a wireless network transmission mode.
And step 204, extracting the image characteristics of the current frame image through the processor.
The image features are quantized data for distinguishing the difference between images, and include color features, texture features, shape features, and spatial relationship features of the images.
Specifically, based on the acquired video stream data, the processor performs feature extraction on the current frame image by using a feature extraction method to obtain image features of the current frame image. The feature extraction is to project a plurality of frames of images in the acquired video stream data to a low-dimensional feature space to obtain low-dimensional sample features which can reflect the essence of the images or distinguish the images. The feature extraction can adopt a deep learning algorithm to extract image features.
And step 206, comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image.
Specifically, the processor extracts image features from each frame of image in the acquired video stream data. The processor then compares the image features of the current frame image with those of the previous frame image according to a comparison condition to obtain a comparison result; the comparison condition is whether the image features of the two frames are consistent. In other words, the current frame image is compared with the previous frame image, and whether to store the current frame image is decided from the comparison result. For example, in a monitoring scene whose background does not change, if person A appears at a certain position in the current frame image and the same person appears at the same position, in the same posture, in the previous frame image, then the image features of the current frame image are consistent with those of the previous frame image. The previous frame image may be any frame prior to the current frame image, typically the immediately preceding frame.
In one embodiment, the processor compares whether the similarity between the image feature of the current frame image and the image feature of the previous frame image is within a preset range. The processor judges whether the similarity between the current frame image and the previous frame image is within a preset range through the image characteristics so as to determine to store or delete the coded data corresponding to the current frame image.
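A similarity test "within a preset range" might be implemented as a thresholded cosine similarity between the two feature vectors; the 0.98 threshold below is an assumed value, not one specified by the patent:

```python
import math

def cosine_similarity(f1, f2):
    # Similarity between two image-feature vectors; 1.0 means the
    # vectors point in exactly the same direction.
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2)

def frames_match(f1, f2, threshold=0.98):
    # Treat the frames as unchanged when the similarity falls
    # within the preset range [threshold, 1.0].
    return cosine_similarity(f1, f2) >= threshold
```

Identical features match; orthogonal features (completely different content) do not.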
And step 208, encoding the current frame image through the processor to obtain encoded data.
Image coding, which may also be referred to as image compression, removes redundant data by a specific compression technique to reduce the amount of data required to represent a digital image, facilitating storage and transmission while satisfying a given fidelity requirement. The image coding techniques may include JPEG (Joint Photographic Experts Group), MPEG (Moving Picture Experts Group), H.265 (High Efficiency Video Coding, HEVC), and MJPEG (Motion JPEG).
Specifically, the processor encodes the current frame image by using an image encoding technique, thereby obtaining encoded data after encoding. The encoded data is the amount of data required for the digital image of the current frame image to remain after compression. The image coding technique may employ a huffman coding technique, a predictive coding technique, a transform coding technique, and a block coding technique. The image coding technique is a technique for expressing an original image pixel matrix with a small amount of data with loss or without loss.
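As a minimal sketch of the encoding step, zlib can stand in for a real image codec such as JPEG or H.265 (it is lossless, unlike most video codecs); the point is only that the encoder removes redundancy so fewer bytes reach the storage medium:

```python
import zlib

def encode_frame(raw_pixels: bytes) -> bytes:
    # Lossless stand-in for the image coding step: compress the
    # raw pixel buffer so less data needs storing or transmitting.
    return zlib.compress(raw_pixels)

def decode_frame(coded: bytes) -> bytes:
    # Recover the original pixels from the encoded data.
    return zlib.decompress(coded)
```

A highly redundant frame (e.g. a uniform background) compresses to far fewer bytes than the raw buffer.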
Step 210, if the image characteristics of the current frame image are not consistent with those of the previous frame image, storing the encoded data corresponding to the current frame image in a storage medium.
Storage medium refers, inter alia, to a carrier for storing data, including non-volatile and/or volatile memory.
Specifically, the processor compares whether the image characteristics of the current frame image are consistent with those of the previous frame image, and if the image characteristics of the current frame image are not consistent with those of the previous frame image, the processor transmits the encoded data corresponding to the current frame image to the storage medium for storage, and the storage medium receives the encoded data and stores the encoded data according to a storage data format.
In one embodiment, if the image feature of the current frame image is consistent with the image feature of the previous frame image, the processor deletes the encoded data corresponding to the current frame image and stores the image feature of the current frame image in the storage medium.
When the image features of the current frame image are consistent with those of the previous frame image, the encoded data corresponding to the current frame image is deleted, removing redundant images from the video stream data. Only the image features of the current frame image are stored in the storage medium; storing just the features of a redundant image provides a verification basis for its deletion, reduces bandwidth transmission cost, and improves the utilization rate of the storage medium.
In the above embodiment, the acquired video stream data is sent to the processor, the image feature of the current frame image in the video stream data is extracted by the processor, and whether the current frame image and the previous frame image change or not is determined according to the image feature. The processor is used for coding the current frame image in real time, and storing the coded data corresponding to the current frame image into the storage medium when the current frame image and the previous frame image are changed.
In one embodiment, a processor includes a deep learning network controller and an encoder connected in parallel; comparing whether the image features of the current frame image are consistent with those of the previous frame image comprises: comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image; storing the encoded data corresponding to the current frame image to a storage medium includes: generating a storage instruction through a deep learning network controller; and sending a storage instruction to the encoder to instruct the encoder to store the encoded data of the current frame image to a storage medium.
The deep learning network controller has a control structure adapted to the deep learning network model, used to optimize the processing logic of the deep learning network model when each frame of image in the video stream is processed by it. The deep learning network model is a kind of neural network model and includes unsupervised and supervised learning network models; it may be, for example, a convolutional neural network model or a recurrent neural network model.
Specifically, the processor includes a deep learning network controller and an encoder, and the deep learning network controller and the encoder are connected in parallel. The deep learning network controller receives the acquired video stream data, and performs feature extraction on each frame of image in the received video stream data through a deep learning network model operated on the deep learning network controller to obtain image features of a current frame of image, image features of a previous frame of image and image features of a later frame of image. Further, the deep learning network controller compares whether the image characteristics of the current frame image are consistent with those of the previous frame image; the deep learning network controller may also compare whether the similarity of the image feature of the current frame image and the image feature of the previous frame image is within a preset range.
If the image characteristics of the current frame image are inconsistent with the image characteristics of the previous frame image or the similarity of the current frame image and the previous frame image is not within a preset range, the deep learning network controller generates a storage instruction for the current frame image according to the comparison result and transmits the generated storage instruction to the encoder; the encoder receives a storage instruction and stores the encoded data of the current frame image to a storage medium according to the instruction of the storage instruction.
In one embodiment, if the image features of the current frame image are consistent with or similar to those of the previous frame image within a preset range, the deep learning network controller generates a deletion instruction for the current frame image according to the comparison result, and transmits the generated deletion instruction to the encoder; the encoder receives the deleting instruction, deletes the encoded data of the current frame image according to the instruction of the deleting instruction, and stores the image characteristics of the current frame image to the storage medium.
In this embodiment, the deep learning network controller and the encoder connected in parallel each process the acquired video stream data, improving the data processing efficiency of the computer. When the comparison by the deep learning network controller finds that the image features of the current frame image are inconsistent with those of the previous frame image, a storage instruction for the current frame image is generated, instructing the encoder to store the encoded data corresponding to the current frame image into the storage medium.
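The parallel arrangement can be sketched with two threads: a controller that compares features and issues store instructions, and an encoder that encodes every frame and stores encoded data only when instructed. All names here are illustrative, not from the patent:

```python
import queue
import threading

def run_parallel_pipeline(frames, extract, encode):
    # Parallel arrangement: the controller thread compares features
    # and issues store instructions, while the encoder thread encodes
    # every frame and stores encoded data only on instruction.
    instructions = queue.Queue()
    storage = []

    def controller():
        prev = None
        for i, frame in enumerate(frames):
            feat = extract(frame)
            instructions.put((i, feat != prev))  # store only on change
            prev = feat
        instructions.put(None)  # end-of-stream marker

    def encoder_worker():
        coded = [encode(f) for f in frames]  # encoder runs in parallel
        while True:
            msg = instructions.get()
            if msg is None:
                break
            i, store = msg
            if store:
                storage.append(coded[i])

    t1 = threading.Thread(target=controller)
    t2 = threading.Thread(target=encoder_worker)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return storage
```

With three frames of which the middle one is redundant, only the first and third encoded frames are stored.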
In one embodiment, a processor includes a deep learning network controller and an encoder connected in series; comparing whether the image features of the current frame image are consistent with those of the previous frame image comprises: comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image; encoding the current frame image through a processor to obtain encoded data, wherein the encoding comprises: generating a storage instruction through a deep learning network controller; and sending the storage instruction to an encoder to instruct the encoder to encode the current frame image to obtain encoded data.
Specifically, the processor includes a deep learning network controller and an encoder, and the deep learning network controller and the encoder are serially connected, and the encoder is serially connected after the deep learning network controller. The deep learning network controller receives the acquired video stream data, and performs feature extraction on each frame of image in the received video stream data through a deep learning network model operated on the deep learning network controller to obtain image features of a current frame of image, image features of a previous frame of image and image features of a later frame of image. Further, the deep learning network controller compares whether the image characteristics of the current frame image are consistent with those of the previous frame image; the deep learning network controller may also compare whether the similarity of the image feature of the current frame image and the image feature of the previous frame image is within a preset range.
If the image characteristics of the current frame image are inconsistent with the image characteristics of the previous frame image or the similarity of the current frame image and the previous frame image is not within a preset range, the deep learning network controller generates a storage instruction for the current frame image according to the comparison result and transmits the generated storage instruction to the encoder; the encoder receives a storage instruction, and encodes the current frame image according to the instruction of the storage instruction to obtain encoded data; and transmitting the encoded data and the image characteristics corresponding to the current frame image to a storage medium for storage.
In one embodiment, if the image features of the current frame image are consistent with or similar to those of the previous frame image within a preset range, the deep learning network controller generates a deletion instruction for the current frame image according to the comparison result, and transmits the generated deletion instruction to the encoder; the encoder receives the deleting instruction and does not encode the current frame image according to the instruction of the deleting instruction; and storing the image characteristics corresponding to the current frame in a storage medium.
In this embodiment, with the deep learning network controller and the encoder connected in series, the deep learning network controller compares the image features of the current frame image with those of the previous frame image, generates a storage instruction for the current frame image only when the image features are inconsistent, and sends the storage instruction to the encoder, instructing the encoder to encode the current frame image into encoded data. Because the encoder processes the current frame image only according to the output of the deep learning network controller, the encoder's processing resources are conserved and the time span of video stream data that can be stored is extended.
In one embodiment, the training step of the deep learning network model comprises: acquiring sample video stream data and a corresponding known label; the sample video stream data comprises a plurality of frames of sample images which are arranged in sequence; extracting the characteristics of the sample image through a deep learning network model to be trained to obtain the reference characteristics of the sample image; determining a loss value between the reference feature and the corresponding known tag; and adjusting model parameters in the deep learning network model according to the loss value until the determined loss value reaches a training stop condition.
The sample video stream data includes a plurality of frames of sample images arranged in sequence. A reference feature is the sample image feature obtained after the deep learning network model to be trained performs feature extraction on a sample image. As the number of training iterations of the deep learning network model increases, the reference features change accordingly.
The training stop condition is that, for every sample image in the video stream data, the loss value between the reference feature and the known label falls within a preset range; that is, the prediction accuracy for each sample image in the video stream data reaches the preset range.
Specifically, the deep learning network controller obtains sample video stream data and corresponding known labels, wherein the sample video stream data includes an ordered sequence of sample images. The image features of the current frame image and of the previous frame image are extracted through the to-be-trained deep learning network model running on the deep learning network controller; whether the image features of the current frame image are consistent with those of the previous frame image is compared; and the reference feature of the current frame image is obtained according to the comparison result. If the comparison result is consistent, the reference feature of the current frame image agrees with the known label. Further, the deep learning network controller determines the loss value between the reference feature and the known label using a loss function, and adjusts the model parameters in the deep learning network model according to the loss value until the obtained loss value reaches the training stop condition.
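The training steps above (extract reference features, compute a loss against the known labels, adjust parameters until a stop condition) can be sketched as follows. This is only an illustrative stand-in: the patent does not specify the network architecture or loss function, so a linear feature extractor with a mean-squared-error loss and plain gradient descent is assumed, and all names (`train_feature_extractor`, `loss_tol`) are hypothetical.

```python
import numpy as np

def train_feature_extractor(samples, labels, lr=0.1, loss_tol=1e-3, max_epochs=500):
    """Minimal training sketch: a linear 'model' W maps each flattened
    sample image to a reference feature; MSE against the known label is the
    loss; W is adjusted until the loss reaches the stop condition."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(labels.shape[1], samples.shape[1]))
    loss = float("inf")
    for _ in range(max_epochs):
        feats = samples @ W.T                   # reference features
        err = feats - labels                    # deviation from known labels
        loss = float(np.mean(err ** 2))         # loss value
        if loss <= loss_tol:                    # training stop condition
            break
        W -= lr * (2 / len(samples)) * err.T @ samples  # adjust parameters
    return W, loss
```

A real deployment would replace the linear map with the deep learning network model and a framework-provided optimizer, but the loop structure is the same.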
In this embodiment, the deep learning network model running on the deep learning network controller is trained on the sample video stream data to obtain a trained model that performs feature extraction and comparison on video stream data more effectively. This improves the accuracy of feature extraction, makes the comparison of image features more reliable, and in turn improves the accuracy of image storage.
In one embodiment, as shown in fig. 3, the method further comprises: step 302, generating a video feature stream corresponding to the video stream data based on the image features corresponding to each frame of image. And step 304, storing the video feature stream to a storage medium.
The video feature stream is obtained by extracting the image features of each frame of image in the video stream data, yielding the image features of the video stream data arranged in sequence. The video feature stream preserves the precedence relationship between frames.
Specifically, after extracting the image features of the current frame image, the deep learning network controller appends them to a feature stream, and does so in turn for the image features of each frame of image in the video stream data, thereby obtaining a video feature stream corresponding to the video stream data and arranged in frame order.
Alternatively, the deep learning network controller may first extract the image features of all images in the video stream data and then generate the sequentially arranged video feature stream based on the image features corresponding to each frame of image.
In this embodiment, a video feature stream corresponding to the video stream data is generated based on the image features corresponding to each frame of image, and the generated video feature stream is stored. All image features of the video stream data are thereby preserved, a reference record remains for the frames deleted from the video stream data, and the utilization of the storage space is improved while the information of the video stream data is retained.
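The frame-by-frame feature stream construction described above can be sketched as follows. The function and dictionary keys are hypothetical names, and `extract_features` is a placeholder for the deep learning network model.

```python
def build_video_feature_stream(frames, extract_features):
    """Extract features frame by frame and append them, in frame order,
    to a video feature stream that preserves the precedence relationship
    between frames."""
    feature_stream = []
    for index, frame in enumerate(frames):
        feats = extract_features(frame)
        # each entry records the frame's position in the sequence
        feature_stream.append({"frame_index": index, "features": feats})
    return feature_stream
```

The resulting list (or its serialized form) is what would be written to the storage medium alongside the encoded frames.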
It should be understood that although the steps in the flowcharts of fig. 2-3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2-3 may include multiple sub-steps or stages that are not necessarily completed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4a, a video stream data storage system includes: an optical sensor, a processor, and a storage medium, the optical sensor and the storage medium each being connected to the processor. The optical sensor is used for acquiring video stream data, the video stream data comprising a plurality of frames of images. The processor is used for extracting the image features of the current frame image; comparing whether the image features of the current frame image are consistent with those of the previous frame image; encoding the current frame image to obtain encoded data; and, if the image features of the current frame image are not consistent with those of the previous frame image, storing the encoded data corresponding to the current frame image in the storage medium.
In one embodiment, the processor further comprises a deep learning network controller and an encoder connected in parallel; the deep learning network controller is used for comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image or not; if the image characteristics of the current frame image are inconsistent with those of the previous frame image, generating a storage instruction through a deep learning network controller; sending a store instruction to an encoder; the encoder is used for storing the coded data of the current frame image to a storage medium according to the storage instruction.
In one embodiment, as shown in FIG. 4b, the processor further comprises a deep learning network controller and an encoder connected in series; the deep learning network controller is used for comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image or not; the deep learning network controller is also used for generating a storage instruction; sending a store instruction to an encoder; the encoder is further used for encoding the current frame image according to the storage instruction to obtain encoded data.
As shown in fig. 4b, with the encoder connected in series after the deep learning network controller, the deep learning network controller receives the video stream data sent by the optical sensor, performs feature extraction on each frame of image in the received video stream data through a deep learning network model running on the deep learning network controller, and determines whether to store the current frame image by comparing its image features with those of the previous frame image.
In one embodiment, the system further includes a memory connected to the processor, the memory storing a deep learning network model to be loaded onto the deep learning network controller for execution. The deep learning network model is trained based on a sample image set constructed from the multiple frames of sample images in the obtained sample video stream data.
In one embodiment, the deep learning network controller is further configured to generate a video feature stream corresponding to the video stream data based on the image features corresponding to each frame of image; the video feature stream is stored to a storage medium.
In one embodiment, the encoder is further configured to delete the encoded data corresponding to the current frame image when the image characteristics of the current frame image are consistent with those of the previous frame image; and storing the image characteristics of the current frame image to a storage medium.
In this embodiment, the acquired video stream data is sent to the processor, which extracts the image features of the current frame image in the video stream data and determines, from those image features, whether the current frame image has changed relative to the previous frame image. The processor encodes the current frame image in real time and stores the encoded data corresponding to the current frame image in the storage medium when the current frame image has changed relative to the previous frame image.
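The end-to-end flow of the system described above can be sketched as follows. All callables and the storage format are hypothetical stand-ins: `extract_features` plays the role of the deep learning network model, `encode` the role of the encoder, and equality of features is used in place of the patent's consistency/similarity check.

```python
def process_stream(frames, extract_features, encode, storage):
    """End-to-end sketch: extract each frame's features, encode and store
    the frame only when its features differ from the previous frame's,
    and store the image features in every case."""
    prev_feats = None
    for frame in frames:
        feats = extract_features(frame)
        if feats != prev_feats:           # frame changed -> encode and store
            storage.append({"features": feats, "encoded": encode(frame)})
        else:                             # unchanged -> keep features only
            storage.append({"features": feats, "encoded": None})
        prev_feats = feats
    return storage
```

Note that the feature record is appended for every frame, mirroring the embodiment in which the features of unchanged frames are still written to the storage medium even though their encoded data is not.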
In one embodiment, as shown in fig. 5, there is provided a video stream data storage apparatus including: a video stream data obtaining module 502, an image feature extracting module 504, an image feature comparing module 506, an image coding module 508 and an image storing module 510, wherein:
a video stream data obtaining module 502, configured to obtain video stream data; the video stream data comprises a plurality of frames of images;
an image feature extraction module 504, configured to extract, by a processor, an image feature of a current frame image;
an image feature comparison module 506, configured to compare whether an image feature of the current frame image is consistent with an image feature of a previous frame image;
an image encoding module 508, configured to encode the current frame image through the processor to obtain encoded data;
the image storage module 510 is configured to store, in a storage medium, the encoded data corresponding to the current frame image if the image feature of the current frame image is inconsistent with the image feature of the previous frame image.
In one embodiment, the image feature comparison module is further configured to compare whether the image feature of the current frame image extracted by the deep learning network model on the deep learning network controller is consistent with the image feature of the previous frame image; the image storage module is also used for generating a storage instruction through the deep learning network controller; and sending a storage instruction to the encoder to instruct the encoder to store the encoded data of the current frame image to a storage medium.
In one embodiment, the image feature comparison module is further configured to compare whether the image feature of the current frame image extracted by the deep learning network model on the deep learning network controller is consistent with the image feature of the previous frame image; generating a storage instruction through a deep learning network controller; and sending the storage instruction to an encoder to instruct the encoder to encode the current frame image to obtain encoded data.
In one embodiment, the image feature comparison module is further configured to obtain sample video stream data and a corresponding known tag; the sample video stream data comprises a plurality of frames of sample images which are arranged in sequence; extracting the characteristics of the sample image through a deep learning network model to be trained to obtain the reference characteristics of the sample image; determining a loss value between the reference feature and the corresponding known tag; and adjusting model parameters in the deep learning network model according to the loss value until the determined loss value reaches a training stop condition.
In one embodiment, as shown in fig. 6, the apparatus 600 further includes a video feature stream generating module 512 and a video feature stream storing module 514, wherein:
a video feature stream generating module 512, configured to generate a video feature stream corresponding to the video stream data based on the image features corresponding to each frame of image;
a video feature stream storage module 514, configured to store the video feature stream to a storage medium.
In one embodiment, as shown in fig. 7, the apparatus 700 further includes an encoded data deleting module 516, where: the encoded data deleting module 516 is configured to delete the encoded data corresponding to the current frame image when the image feature of the current frame image is consistent with the image feature of the previous frame image; and storing the image characteristics of the current frame image into a storage medium.
In this embodiment, the acquired video stream data is sent to the processor, which extracts the image features of the current frame image in the video stream data and determines, from those image features, whether the current frame image has changed relative to the previous frame image. The processor encodes the current frame image in real time and stores the encoded data corresponding to the current frame image in the storage medium when the current frame image has changed relative to the previous frame image.
For specific limitations of the video stream data storage apparatus, reference may be made to the limitations of the video stream data storage method above, which are not repeated here. Each module in the video stream data storage apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware within, or independent of, the processor of the computer device, or stored in software in the memory of the computer device, so that the processor can invoke and perform the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes an optical sensor, a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a video stream data storage method.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program: acquiring video stream data; the video stream data comprises a plurality of frames of images which are arranged in sequence; extracting image characteristics of a current frame image through a processor; comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image; encoding the current frame image through a processor to obtain encoded data; and if the image characteristics of the current frame image are not consistent with those of the previous frame image, storing the coded data corresponding to the current frame image into a storage medium.
In one embodiment, a processor includes a deep learning network controller and an encoder connected in parallel; comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image, and when the processor executes the computer program, the following steps are also realized: comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image; storing the coded data corresponding to the current frame image in a storage medium, wherein the processor further realizes the following steps when executing the computer program: generating a storage instruction through a deep learning network controller; and sending a storage instruction to the encoder to instruct the encoder to store the encoded data of the current frame image to a storage medium.
In one embodiment, a processor includes a deep learning network controller and an encoder connected in series; comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image, and when the processor executes the computer program, the following steps are also realized: comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image; the processor encodes the current frame image to obtain encoded data, and the processor executes the computer program to further realize the following steps: generating a storage instruction through a deep learning network controller; and sending the storage instruction to an encoder to instruct the encoder to encode the current frame image to obtain encoded data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring sample video stream data and a corresponding known label; the sample video stream data comprises a plurality of frames of sample images which are arranged in sequence; extracting the characteristics of the sample image through a deep learning network model to be trained to obtain the reference characteristics of the sample image; determining a loss value between the reference feature and the corresponding known tag; and adjusting model parameters in the deep learning network model according to the loss value until the determined loss value reaches a training stop condition.
In one embodiment, the processor, when executing the computer program, further performs the steps of: generating a video feature stream corresponding to video stream data based on the image features corresponding to each frame of image; the video feature stream is stored to a storage medium.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the image characteristics of the current frame image are consistent with those of the previous frame image, deleting the coded data corresponding to the current frame image; and storing the image characteristics of the current frame image into a storage medium.
In this embodiment, the acquired video stream data is sent to the processor, which extracts the image features of the current frame image in the video stream data and determines, from those image features, whether the current frame image has changed relative to the previous frame image. The processor encodes the current frame image in real time and stores the encoded data corresponding to the current frame image in the storage medium when the current frame image has changed relative to the previous frame image.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring video stream data; the video stream data comprises a plurality of frames of images which are arranged in sequence; extracting image characteristics of a current frame image through a processor; comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image; encoding the current frame image through a processor to obtain encoded data; and if the image characteristics of the current frame image are not consistent with those of the previous frame image, storing the coded data corresponding to the current frame image into a storage medium.
In one embodiment, a processor includes a deep learning network controller and an encoder connected in parallel; comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image, and realizing the following steps when the computer program is executed by the processor: comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image; storing the encoded data corresponding to the current frame image in a storage medium, wherein when the computer program is executed by a processor, the computer program implements the following steps: generating a storage instruction through a deep learning network controller; and sending a storage instruction to the encoder to instruct the encoder to store the encoded data of the current frame image to a storage medium.
In one embodiment, a processor includes a deep learning network controller and an encoder connected in series; comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image, and realizing the following steps when the computer program is executed by the processor: comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image; encoding the current frame image by a processor to obtain encoded data, wherein the computer program when executed by the processor implements the steps of: generating a storage instruction through a deep learning network controller; and sending the storage instruction to an encoder to instruct the encoder to encode the current frame image to obtain encoded data.
In one embodiment, the computer program when executed by the processor implements the steps of: acquiring sample video stream data and a corresponding known label; the sample video stream data comprises a plurality of frames of sample images which are arranged in sequence; extracting the characteristics of the sample image through a deep learning network model to be trained to obtain the reference characteristics of the sample image; determining a loss value between the reference feature and the corresponding known tag; and adjusting model parameters in the deep learning network model according to the loss value until the determined loss value reaches a training stop condition.
In one embodiment, the computer program when executed by the processor implements the steps of: generating a video feature stream corresponding to video stream data based on the image features corresponding to each frame of image; the video feature stream is stored to a storage medium.
In one embodiment, the computer program when executed by the processor implements the steps of: when the image characteristics of the current frame image are consistent with those of the previous frame image, deleting the coded data corresponding to the current frame image; and storing the image characteristics of the current frame image into a storage medium.
In this embodiment, the acquired video stream data is sent to the processor, which extracts the image features of the current frame image in the video stream data and determines, from those image features, whether the current frame image has changed relative to the previous frame image. The processor encodes the current frame image in real time and stores the encoded data corresponding to the current frame image in the storage medium when the current frame image has changed relative to the previous frame image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A method of video stream data storage, the method comprising:
acquiring video stream data; the video stream data comprises a plurality of frames of images which are arranged in sequence;
extracting image characteristics of a current frame image through a processor;
comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image;
encoding the current frame image through the processor to obtain encoded data;
and if the image characteristics of the current frame image are not consistent with those of the previous frame image, storing the coded data corresponding to the current frame image into a storage medium.
2. The method of claim 1, wherein the processor comprises a deep learning network controller and an encoder connected in parallel; the step of comparing whether the image characteristics of the current frame image are consistent with the image characteristics of the previous frame image comprises:
comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image;
the storing the encoded data corresponding to the current frame image to the storage medium includes:
generating, by the deep learning network controller, a storage instruction;
and sending the storage instruction to the encoder to instruct the encoder to store the encoded data of the current frame image to a storage medium.
3. The method of claim 1, wherein the processor comprises a serially connected deep learning network controller and encoder; the step of comparing whether the image characteristics of the current frame image are consistent with the image characteristics of the previous frame image comprises:
comparing whether the image characteristics of the current frame image extracted by the deep learning network model on the deep learning network controller are consistent with the image characteristics of the previous frame image;
the encoding the current frame image by the processor to obtain encoded data includes:
generating, by the deep learning network controller, a storage instruction;
and sending the storage instruction to the encoder to instruct the encoder to encode the current frame image to obtain encoded data.
4. The method of claim 2 or 3, wherein the training step of the deep learning network model comprises:
acquiring a sample video stream and a corresponding known label; the sample video stream comprises a plurality of frames of sample images arranged in sequence;
extracting the characteristics of the sample image through a deep learning network model to be trained to obtain the reference characteristics of the sample image;
determining a loss value between the reference feature and the corresponding known label;
and adjusting model parameters in the deep learning network model according to the loss value until the determined loss value reaches a training stopping condition.
5. The method of claim 1, further comprising:
generating a video feature stream corresponding to the video stream data based on the image features corresponding to each frame of image;
storing the video feature stream to the storage medium.
6. The method of claim 1, further comprising:
when the image characteristics of the current frame image are consistent with those of the previous frame image, deleting the coded data corresponding to the current frame image;
and storing the image characteristics of the current frame image to the storage medium.
7. A video stream data storage system, the system comprising: an optical sensor, a processor, and a storage medium; the optical sensor and the storage medium are respectively connected with the processor;
the optical sensor is used for acquiring video stream data; the video stream data comprises a plurality of frames of images;
the processor is used for extracting the image characteristics of the current frame image; comparing whether the image characteristics of the current frame image are consistent with those of the previous frame image; coding the current frame image to obtain coded data; and if the image characteristics of the current frame image are not consistent with those of the previous frame image, storing the coded data corresponding to the current frame image into the storage medium.
8. The system of claim 7, wherein the processor comprises a deep learning network controller and an encoder connected in parallel;
the deep learning network controller is configured to: compare whether the image features of the current frame image, extracted by a deep learning network model on the deep learning network controller, are consistent with the image features of the previous frame image; if the image features of the current frame image are inconsistent with the image features of the previous frame image, generate a storage instruction; and send the storage instruction to the encoder; and
the encoder is configured to store the encoded data of the current frame image to a storage medium according to the storage instruction.
9. The system of claim 7, wherein the processor comprises a serially connected deep learning network controller and encoder;
the deep learning network controller is configured to compare whether the image features of the current frame image, extracted by a deep learning network model on the deep learning network controller, are consistent with the image features of the previous frame image;
the deep learning network controller is further configured to generate a storage instruction and send the storage instruction to the encoder; and
the encoder is configured to encode the current frame image according to the storage instruction to obtain encoded data.
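The practical difference between the parallel arrangement (claim 8) and the serial arrangement (claim 9) is where encoding happens: in the serial arrangement the encoder runs only after the controller issues a storage instruction, so unchanged frames are never encoded, whereas in the parallel arrangement every frame is encoded and redundant encodings are discarded (claim 12). A hypothetical sketch of the two control flows (function names are illustrative, not from the patent):

```python
def run_serial(frames, extract, encode):
    # Claim 9: encoder runs only when the controller issues a storage instruction.
    stored, prev, encode_calls = [], None, 0
    for frame in frames:
        feat = extract(frame)
        if feat != prev:                   # controller detects a feature change
            stored.append(encode(frame))   # encoder invoked on instruction only
            encode_calls += 1
        prev = feat
    return stored, encode_calls

def run_parallel(frames, extract, encode):
    # Claim 8: encoder runs on every frame; controller decides what to keep.
    stored, prev, encode_calls = [], None, 0
    for frame in frames:
        feat = extract(frame)
        data = encode(frame)               # encoder always produces encoded data
        encode_calls += 1
        if feat != prev:                   # unchanged frames are discarded
            stored.append(data)
        prev = feat
    return stored, encode_calls

frames = [b"A", b"A", b"A", b"B"]
extract = lambda f: f            # trivial stand-in feature extractor
encode = lambda f: b"enc:" + f   # trivial stand-in encoder
s1, c1 = run_serial(frames, extract, encode)
s2, c2 = run_parallel(frames, extract, encode)
print(c1, c2)  # serial encodes 2 frames, parallel encodes all 4
```

Both arrangements store the same encoded data; the serial one trades encoder throughput requirements for the latency of waiting on the controller's decision.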
10. The system of claim 8 or 9, further comprising a memory coupled to the processor, the memory storing a deep learning network model to be loaded into the deep learning network controller for execution; the deep learning network model is obtained by training on a sample image set constructed from multiple frames of sample images in acquired sample video stream data.
11. The system according to claim 8 or 9, wherein the deep learning network controller is further configured to generate a video feature stream corresponding to the video stream data based on the image features corresponding to each frame of image; storing the video feature stream to the storage medium.
12. The system according to claim 8 or 9, wherein the encoder is further configured to delete the encoded data corresponding to the current frame image when the image features of the current frame image are consistent with the image features of the previous frame image, and to store the image features of the current frame image to the storage medium.
13. A video stream data storage apparatus, characterized in that the apparatus comprises:
the video stream data acquisition module is used for acquiring video stream data; the video stream data comprises a plurality of frames of images;
the image characteristic extraction module is used for extracting the image characteristics of the current frame image through the processor;
the image characteristic comparison module is used for comparing whether the image characteristic of the current frame image is consistent with the image characteristic of the previous frame image;
the image coding module is used for encoding the current frame image through the processor to obtain encoded data;
and the image storage module is used for storing the encoded data corresponding to the current frame image to a storage medium if the image features of the current frame image are inconsistent with the image features of the previous frame image.
14. The apparatus of claim 13, wherein the image feature comparison module is further configured to compare whether the image features of the current frame image, extracted by a deep learning network model on a deep learning network controller, are consistent with the image features of the previous frame image; and the image storage module is further configured to generate a storage instruction through the deep learning network controller and send the storage instruction to an encoder to instruct the encoder to store the encoded data of the current frame image to a storage medium.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN201910340860.2A 2019-04-25 2019-04-25 Video stream data storage method, device, system and storage medium Pending CN111866443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910340860.2A CN111866443A (en) 2019-04-25 2019-04-25 Video stream data storage method, device, system and storage medium

Publications (1)

Publication Number Publication Date
CN111866443A true CN111866443A (en) 2020-10-30

Family

ID=72951617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910340860.2A Pending CN111866443A (en) 2019-04-25 2019-04-25 Video stream data storage method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN111866443A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004120417A (en) * 2002-09-26 2004-04-15 Canon Inc Moving picture compressing device and method for the same
US20080310510A1 (en) * 2005-03-22 2008-12-18 Mitsubishi Electric Corporation Image Coding, Recording and Reading Apparatus
US20090059002A1 (en) * 2007-08-29 2009-03-05 Kim Kwang Baek Method and apparatus for processing video frame
CN103327306A (en) * 2013-06-14 2013-09-25 广东威创视讯科技股份有限公司 Method and device for storing video surveillance image
CN105357523A (en) * 2015-10-20 2016-02-24 苏州科技学院 High-order singular value decomposition (HOSVD) algorithm based video compression system and method
CN106686404A (en) * 2016-12-16 2017-05-17 中兴通讯股份有限公司 Video analysis platform, matching method, accurate advertisement delivery method and system
CN107330074A (en) * 2017-06-30 2017-11-07 中国科学院计算技术研究所 The image search method encoded based on deep learning and Hash
WO2017219896A1 (en) * 2016-06-21 2017-12-28 中兴通讯股份有限公司 Method and device for transmitting video stream
CN108737825A (en) * 2017-04-13 2018-11-02 腾讯科技(深圳)有限公司 Method for coding video data, device, computer equipment and storage medium
CN108847251A * 2018-07-04 2018-11-20 武汉斗鱼网络科技有限公司 Voice deduplication method, device, server and storage medium
CN109474799A (en) * 2018-10-26 2019-03-15 西安科锐盛创新科技有限公司 Image storage method and its system based on video monitoring
CN109543735A (en) * 2018-11-14 2019-03-29 北京工商大学 Video copying detection method and its system
CN109618227A (en) * 2018-10-26 2019-04-12 西安科锐盛创新科技有限公司 Video data storage method and its system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Jia: "Joint optimization of MEC caching and transcoding for video streaming" (面向视频流的MEC缓存转码联合优化研究), 《电信科学》 (Telecommunications Science) *

Similar Documents

Publication Publication Date Title
US10496903B2 (en) Using image analysis algorithms for providing training data to neural networks
US10896522B2 (en) Method and apparatus for compressing image
US10462476B1 (en) Devices for compression/decompression, system, chip, and electronic device
US20220060724A1 (en) Method and system for optimized delta encoding
CN108156519B (en) Image classification method, television device and computer-readable storage medium
US20170264902A1 (en) System and method for video processing based on quantization parameter
EP3583777A1 (en) A method and technical equipment for video processing
KR20190130479A (en) System for compressing and restoring picture based on AI
CN108399052B (en) Picture compression method and device, computer equipment and storage medium
KR102299958B1 (en) Systems and methods for image compression at multiple, different bitrates
WO2022028197A1 (en) Image processing method and device thereof
Chakraborty et al. MAGIC: Machine-learning-guided image compression for vision applications in Internet of Things
CN110766048A (en) Image content identification method and device, computer equipment and storage medium
CN111314707B (en) Data mapping identification method, device and equipment and readable storage medium
RU2646348C2 (en) Method of compression of image vector
CN111866443A (en) Video stream data storage method, device, system and storage medium
CN113507611B (en) Image storage method and device, computer equipment and storage medium
CN110545402A (en) underground monitoring video processing method, computer equipment and storage medium
US20230028426A1 (en) Method and system for optimizing image and video compression for machine vision
CN114501031B (en) Compression coding and decompression method and device
WO2022022176A1 (en) Image processing method and related device
US11350134B2 (en) Encoding apparatus, image interpolating apparatus and encoding program
CN115294429A (en) Feature domain network training method and device
CN111243046A (en) Image quality detection method, device, electronic equipment and storage medium
CN111144241A (en) Target identification method and device based on image verification and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20221206