CN112492345A - Audio and video storage method, system, terminal and computer readable storage medium

Audio and video storage method, system, terminal and computer readable storage medium

Info

Publication number
CN112492345A
CN112492345A (application CN202011351568.XA)
Authority
CN
China
Prior art keywords
audio
video
cloud
stream data
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011351568.XA
Other languages
Chinese (zh)
Other versions
CN112492345B (en)
Inventor
廖佳鑫
沈远浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Skyworth Digital Technology Co Ltd
Original Assignee
Shenzhen Skyworth Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Skyworth Digital Technology Co Ltd
Priority to CN202011351568.XA
Publication of CN112492345A
Application granted
Publication of CN112492345B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23109Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion by placing content in organized collections, e.g. EPG data repository
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/232Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N21/2323Content retrieval operation locally within server, e.g. reading video streams from disk arrays using file mapping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4398Processing of audio elementary streams involving reformatting operations of audio signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/440272Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA for performing aspect ratio conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The application discloses an audio and video storage method, a system, a terminal and a computer readable storage medium, wherein the audio and video storage method comprises the following steps: sending the encoding parameters of the audio and video to the cloud, the cloud returning the encoding parameter number corresponding to the encoding parameters of each code stream; acquiring audio and video stream data, setting an encoding parameter number and timestamp information for each frame of audio and video stream data, and creating index information of each frame of audio and video stream data according to the timestamp information; sending the index information to the cloud, the cloud returning the storage position and the file name of the audio and video stream data based on the index information; and uploading the audio and video stream data to the cloud according to the storage position and the file name, the cloud storing the audio and video stream data. The method solves the technical problems that the existing three-in-one storage of media files cannot take effect immediately when the upload code rate is switched and that file reading and retrieval efficiency is low; it realizes streaming recording and improves the reading efficiency of cloud storage and the retrieval efficiency of files.

Description

Audio and video storage method, system, terminal and computer readable storage medium
Technical Field
The present application relates to the field of streaming media technologies, and in particular, to an audio and video storage method, system, terminal, and computer-readable storage medium.
Background
Currently, the common cloud storage schemes on the market only upload locally stored recording files. Each file is an MP4 or TS package that independently contains the audio and video encoding parameters, the audio and video encoded data, and the timestamp and index data, which is called a three-in-one storage scheme, and one file carries only one set of video and audio encoding parameters. Although most IP cameras already support outputting video at more than one resolution simultaneously, typically only one of the resolutions is stored in the cloud. Even where some manufacturers provide an interface in the IP camera that allows users to select which code stream, at which code rate, to upload, the switch cannot take effect immediately because of the limitations of MP4 or TS encapsulation; it can only occur at the beginning of the next file. Secondly, what the user retrieves is a list of media files; during playback, the encoding parameters, encoded data and index data have to be located independently in each file, and the files are usually read in a non-sequential, skipping manner. The existing three-in-one storage of media files therefore causes the problems that a switch of the upload code rate cannot take effect immediately and that file reading and retrieval efficiency is low.
Disclosure of Invention
The embodiment of the application aims to solve the problems that, because the existing three-in-one storage of media files is adopted, a switch of the upload code rate cannot take effect immediately and file reading and retrieval efficiency is low.
In order to achieve the above object, an aspect of the present application provides an audio and video storage method, where the audio and video storage method includes the following steps:
sending the encoding parameters of the audio and video to a cloud end, and returning the encoding parameter numbers corresponding to the encoding parameters of each path of code stream by the cloud end;
acquiring audio and video stream data, setting the coding parameter number and the timestamp information for each frame of audio and video stream data, and creating index information of each frame of audio and video stream data according to the timestamp information;
the index information is sent to the cloud end, and the cloud end returns the storage position and the file name of the audio and video streaming data based on the index information;
and sending the audio and video stream data to the cloud according to the storage position and the file name, and storing the audio and video stream data by the cloud.
Optionally, the step of setting the coding parameter number and the timestamp information for each frame of audio/video stream data includes:
storing the acquired audio and video stream data into a queue based on the sequence of the timestamp information;
and setting the coding parameter number and the timestamp information for each frame of audio/video stream data in the queue.
Optionally, the step of sending the index information to the cloud includes:
dividing an audio/video file to be stored into a plurality of audio/video slice files, and creating index information corresponding to the plurality of audio/video slice files respectively;
and sending index information corresponding to the plurality of audio and video slice files to the cloud.
Optionally, the step of sending the audio/video stream data to the cloud according to the storage location and the file name includes:
receiving the offset of the audio and video slice file sent by the cloud;
and determining the storage position of the audio and video stream data in the file corresponding to the cloud end according to the offset, and sending the audio and video stream data to the storage position in the file corresponding to the cloud end.
Optionally, before the step of sending the audio/video stream data to the cloud according to the storage location and the file name, the method further includes:
when the audio and video stream data is uploaded for the first time after power-on, setting a flag bit in the index information;
and sending the index information carrying the flag bit to the cloud end, and performing data verification by the cloud end based on the flag bit.
Optionally, the step of sending the audio/video stream data to the cloud according to the storage location and the file name further includes:
sending the audio and video stream data in append mode to an existing file of the cloud according to the storage position and the file name; or,
sending the audio and video stream data to a newly created file at the cloud according to the storage position and the file name.
Optionally, before the step of sending the index information to the cloud, the method includes:
acquiring the accumulated quantity of the audio and video stream data;
and when the accumulated quantity exceeds a set threshold value, executing the step of sending the index information to the cloud.
In addition, to achieve the above object, another aspect of the present application further provides an audio/video storage system, including:
the first sending module is used for sending the coding parameters of the audio and video to a cloud end, and the cloud end returns the coding parameter numbers corresponding to the coding parameters of each path of code stream;
the acquisition module is used for acquiring audio and video stream data, setting the coding parameter number and the timestamp information for each frame of audio and video stream data, and creating index information of each frame of audio and video stream data according to the timestamp information;
the second sending module is used for sending the index information to the cloud end, and the cloud end returns the storage position and the file name of the audio and video stream data based on the index information;
and the third sending module is used for sending the audio and video streaming data to the cloud according to the storage position and the file name, and storing the audio and video streaming data by the cloud.
In addition, in order to achieve the above object, another aspect of the present application further provides a terminal, where the terminal includes a memory, a processor, and an audio/video storage program stored in the memory and running on the processor, and the processor implements the steps of the audio/video storage method when executing the audio/video storage program.
In addition, in order to achieve the above object, another aspect of the present application further provides a computer readable storage medium, where an audio/video storage program is stored on the computer readable storage medium, and when the audio/video storage program is executed by a processor, the steps of the audio/video storage method are implemented.
In this embodiment, the encoding parameters of the audio and video are sent to the cloud, and the cloud returns the encoding parameter number corresponding to the encoding parameters of each code stream; audio and video stream data are acquired, an encoding parameter number and timestamp information are set for each frame of audio and video stream data, and index information of each frame is created according to the timestamp information; the index information is sent to the cloud, and the cloud returns the storage position and the file name of the audio and video stream data based on the index information; the audio and video stream data are then uploaded to the cloud according to the storage position and the file name and stored by the cloud. This solves the problems that the existing three-in-one storage of media files cannot take effect immediately when the upload code rate is switched and that file reading and retrieval efficiency is low: the traditional three-in-one storage is split into three independently stored parts, and the corresponding mapping relation is established in the files, so that streaming recording is realized and the reading efficiency of cloud storage and the retrieval efficiency of the files are improved.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a first embodiment of an audio/video storage method according to the present application;
fig. 3 is a schematic flowchart of a second embodiment of the audio/video storage method according to the present application;
fig. 4 is a schematic flowchart of a third embodiment of the audio/video storage method according to the present application;
fig. 5 is a schematic flow chart illustrating setting of the coding parameter number and the timestamp information for each frame of audio/video stream data in the audio/video storage method of the present application;
fig. 6 is a schematic flow chart of sending the index information to the cloud in the audio and video storage method of the present application;
fig. 7 is a schematic flowchart of a process of sending the audio/video stream data to the cloud according to the storage location and the file name in the audio/video storage method according to the present application;
fig. 8 is a schematic operation flow diagram of the audio/video storage method according to the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The main solution of the embodiment of the application is as follows: sending the encoding parameters of the audio and video to a cloud end, the cloud end returning the encoding parameter number corresponding to the encoding parameters of each code stream; acquiring audio and video stream data, setting the encoding parameter number and the timestamp information for each frame of audio and video stream data, and creating index information of each frame of audio and video stream data according to the timestamp information; sending the index information to the cloud end, the cloud end returning the storage position and the file name of the audio and video stream data based on the index information; and sending the audio and video stream data to the cloud according to the storage position and the file name, the cloud storing the audio and video stream data.
With the existing three-in-one storage of media files, a switch of the upload code rate cannot take effect immediately and can only occur when the next file starts. Secondly, what the user retrieves is a list of media files; the encoding parameters, the encoded data and the index data have to be located independently in each file during playback, and the files are usually read in a non-sequential, skipping manner, so file reading and retrieval efficiency is low. In this application, the encoding parameters of the audio and video are sent to the cloud, and the cloud returns the encoding parameter number corresponding to the encoding parameters of each code stream; audio and video stream data are acquired, an encoding parameter number and timestamp information are set for each frame of audio and video stream data, and index information of each frame is created according to the timestamp information; the index information is sent to the cloud, and the cloud returns the storage position and the file name of the audio and video stream data based on the index information; the audio and video stream data are uploaded to the cloud according to the storage position and the file name and stored by the cloud. The traditional three-in-one storage is thus split into three independently stored parts, and the corresponding mapping relation is established in the files, so that streaming recording is realized and the reading efficiency of cloud storage and the retrieval efficiency of the files are improved.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may also include a camera, sensor, audio circuitry, detector, and the like. Of course, the terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer and a temperature sensor, which are not described herein again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 does not constitute a limitation of the terminal device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer-readable storage medium, may include therein an operating system, a network communication module, a user interface module, and an audio and video storage program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the audio and video storage program in the memory 1005 and perform the following operations:
sending the encoding parameters of the audio and video to a cloud end, and returning the encoding parameter numbers corresponding to the encoding parameters of each path of code stream by the cloud end;
acquiring audio and video stream data, setting the coding parameter number and the timestamp information for each frame of audio and video stream data, and creating index information of each frame of audio and video stream data according to the timestamp information;
the index information is sent to the cloud end, and the cloud end returns the storage position and the file name of the audio and video streaming data based on the index information;
and sending the audio and video stream data to the cloud according to the storage position and the file name, and storing the audio and video stream data by the cloud.
Referring to fig. 2, fig. 2 is a schematic flowchart of a first embodiment of an audio/video storage method according to the present application.
While the embodiments of the present application provide an embodiment of an audio-video storage method, it should be noted that, although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from that shown or described here.
The audio and video storage method comprises the following steps:
step S10, sending the coding parameters of the audio and video to the cloud, and returning the coding parameter numbers corresponding to the coding parameters of each path of code stream by the cloud;
the application environment of the method provided in this embodiment is to be applied to an intelligent camera, and it should be noted that the method provided in this embodiment may also be applied to an intelligent terminal such as a smart phone or a tablet computer, and is not limited herein.
When the intelligent camera is started, the encoding parameters of all audio and video streams in the current intelligent camera are first reported to the cloud. The encoding parameters include: bitrate (the amount of data used by a video or audio stream per unit time), frame rate (a measure of the number of frames displayed per second), sampling rate (the number of samples taken per second from a continuous signal to form a discrete signal), packaging format (i.e. the container format, such as MP4, AVI, MKV, FLV, WMA, etc.), picture aspect ratio (the ratio of the width to the height of the video picture), resolution (the number of pixels across the width and height of the video), and so on. The system comprises a plurality of intelligent cameras, a video server and a player; each intelligent camera generates and maintains only one media parameter file at the cloud. Since the audio and video encoding parameters remain unchanged most of the time, sharing them saves storage capacity and also makes it easier for the player to join two streams after the parameters change; a mode of overwrite recording is adopted.
When the cloud receives the audio and video encoding parameters reported by the intelligent camera, it reads the media parameter file the intelligent camera has previously stored in the cloud, traverses all the encoding parameters reported by the camera, and judges whether each encoding parameter already exists in the media parameter file; if it does not exist, the encoding parameter reported by the camera is written into the media parameter file; if it exists, the number of that encoding parameter in the media parameter file is returned. It is further judged whether there are encoding parameters that have not yet been traversed; if so, the process returns to the step of judging whether the encoding parameter exists in the media parameter file; if not, the cloud returns the encoding parameter number of the encoding parameters of each code stream of the intelligent camera.
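By way of a non-limiting illustration, the parameter-registration logic described above can be sketched in Python as follows. The sketch assumes the media parameter file is a simple per-camera list of parameter sets and that the returned encoding parameter number is the index of the entry in that list; the function and field names are assumptions made for the example and are not defined by this disclosure.

```python
# Hypothetical sketch of the cloud-side registration of encoding parameters.
# The media parameter file is modeled as a per-camera list; the "encoding
# parameter number" returned for each code stream is the entry's index.

def register_encoding_params(media_param_file, reported_params):
    """media_param_file: list of parameter dicts already stored for this camera.
    reported_params: parameter dicts reported at start-up, one per code stream.
    Returns one encoding parameter number per reported code stream."""
    param_numbers = []
    for params in reported_params:          # traverse all reported parameters
        if params in media_param_file:      # already present: reuse its number
            number = media_param_file.index(params)
        else:                               # not present: append to the file
            media_param_file.append(params)
            number = len(media_param_file) - 1
        param_numbers.append(number)
    return param_numbers

# Example: a main stream and a sub stream reported by one camera.
media_file = []
streams = [
    {"resolution": "1920x1080", "frame_rate": 25, "bitrate": 2_000_000},
    {"resolution": "640x360", "frame_rate": 15, "bitrate": 512_000},
]
print(register_encoding_params(media_file, streams))  # -> [0, 1]
```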
Step S20, acquiring audio/video stream data, setting the coding parameter number and the time stamp information for each frame of audio/video stream data, and creating the index information of each frame of audio/video stream data according to the time stamp information;
Audio and video stream data are acquired through the intelligent camera and a microphone; an encoding parameter number and timestamp information are set for each frame of audio and video stream data, and index information of each frame of audio and video stream data is created according to the timestamp information. The timestamp information reflects the time order in which the audio and video stream data are acquired. For example, for each video frame, the frame type and the timestamp of the current frame are obtained from the frame information of the current frame, the time offset of the current frame is determined according to the timestamp, the amount of video data located before the current frame is taken as the data offset of the current frame, and the frame type, the timestamp, the time offset and the data offset of the current frame are then combined into the index information of the current frame. Since the index information contained in each frame is different, the index information of each frame is independent.
Referring to fig. 5, the step of setting the coding parameter number and the timestamp information for each frame of audio/video stream data includes:
step S21, storing the acquired audio and video stream data into a queue based on the sequence of the timestamp information;
and step S22, setting the coding parameter number and the timestamp information for each frame of audio/video stream data in the queue.
The camera stores the acquired audio and video stream data into a queue in the order of the timestamp information, and sets the encoding parameter number and the timestamp information for each frame of audio and video stream data in the queue; these are later used for switching the code rate and for playing in order according to the encoding parameters and timestamps when the audio and video stream is viewed. A frame is usually composed of a frame header plus data information, and the encoding parameter number and the timestamp information are set in the frame header of each frame.
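The frame-header tagging described above could, for illustration only, look like the following sketch. The byte layout (a 2-byte encoding parameter number, an 8-byte millisecond timestamp and a 4-byte payload length) is an assumption of the sketch; the disclosure does not fix a concrete header format.

```python
import struct

# Assumed header layout: parameter number (2 bytes), timestamp in ms (8 bytes),
# payload length (4 bytes), all big-endian.
HEADER_FMT = ">HQI"
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def tag_frame(param_number, timestamp_ms, payload):
    """Prefix one frame of stream data with its header before queueing it."""
    header = struct.pack(HEADER_FMT, param_number, timestamp_ms, len(payload))
    return header + payload

def untag_frame(data):
    """Recover the parameter number, timestamp and payload from a tagged frame."""
    param_number, timestamp_ms, length = struct.unpack_from(HEADER_FMT, data)
    return param_number, timestamp_ms, data[HEADER_SIZE:HEADER_SIZE + length]
```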
Step S30, the index information is sent to the cloud end, and the cloud end returns the storage position and the file name of the audio and video stream data based on the index information;
the intelligent camera is provided with a limit value of audio and video data, when the audio and video data are accumulated to exceed the limit value or exceed the set time, index information is sent to the cloud, and the cloud returns the storage position and the file name of the audio and video stream data based on the index information. After receiving the index information, the cloud acquires contents in the index information, such as a frame type, a timestamp, a time offset, a data offset and the like, and determines a storage position and a file name of the current audio and video based on the contents, wherein the storage position is a C disk and the file name is a file A.
Referring to fig. 6, the step of sending the index information to the cloud includes:
step S31, dividing the audio/video file to be stored into a plurality of audio/video slice files, and creating index information corresponding to the plurality of audio/video slice files respectively;
and step S32, sending index information corresponding to the plurality of audio/video slice files to the cloud.
In order to shorten the loading time before audio and video playback, the audio and video can be stored in the cloud in slices. With slice processing, when the player plays the Nth audio and video segment it can pre-download the (N+1)th segment, so that playback is smoother; it will not, however, download the (N+2)th segment, which saves bandwidth for other users and reduces the pressure on the video server.
The intelligent camera divides the audio and video file to be stored into a plurality of audio and video slice files, creates the index information corresponding to each of the slice files, and sends the index information corresponding to the slice files to the cloud. For example: the intelligent camera divides the audio and video file to be stored according to a specified time interval; if the specified time interval is 8 seconds, the audio and video is cut every 8 seconds during recording. As the audio and video is sliced, the index information of each slice can be set as soon as that slice is obtained.
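The fixed-interval slicing in the example above (one slice roughly every 8 seconds) might be implemented along the following lines; cutting only at an I frame, so that each slice starts with a key frame, is an extra assumption made for the sketch.

```python
SLICE_SECONDS = 8  # the interval used in the example above

def split_into_slices(frames, slice_ms=SLICE_SECONDS * 1000):
    """frames: list of (frame_type, timestamp_ms, payload) in capture order.
    Starts a new slice once the current one spans at least slice_ms,
    preferring to cut at an I frame so every slice begins with a key frame."""
    slices, current, slice_start = [], [], None
    for frame in frames:
        frame_type, ts, _payload = frame
        if slice_start is None:
            slice_start = ts
        if current and frame_type == "I" and ts - slice_start >= slice_ms:
            slices.append(current)        # close the current slice
            current, slice_start = [], ts
        current.append(frame)
    if current:
        slices.append(current)
    return slices
```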
The cloud generates the upload position and slice file name of the audio and video stream data according to the index information of the audio and video slice file, generates the index file corresponding to the slice, and writes the index information into the index file. The index file may include: the serial number of the camera, the file size of the audio and video file, the file name, the recording start time, the recording end time, the cloud storage address, and other information. The index file records the time points and key positions of the key frames of the streaming audio and video, and enables fast indexing when the audio and video is played on a device such as a mobile phone or a computer. When the intelligent camera uploads a slice file of the surveillance video to the cloud storage server, the I-frame information needs to be written into the frame index file corresponding to the slice file. The cloud further judges whether the intelligent camera is uploading audio and video data for the first time after power-on; if so, the server side first completes a consistency check on the files related to the camera's audio and video data that have been uploaded to the cloud, and after the check is completed, calculates the append offset of the slice file and returns the upload position, slice file name and append offset of the audio and video data to the intelligent camera; if it is not the first upload, the append offset of the slice file is calculated directly, and the upload position, slice file name and append offset of the audio and video stream data are returned to the intelligent camera.
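A hypothetical cloud-side sketch of this step is given below. The index-record fields follow the list above (camera serial number, file size, file name, recording start and end time, cloud storage address); storing them as JSON lines and deriving the append offset from the current size of the slice data file are assumptions of the sketch rather than requirements of the disclosure.

```python
import json
import os

def handle_slice_index(index_dir, data_dir, slice_index, first_upload_after_boot):
    """slice_index: dict carrying the fields listed above for one slice."""
    if first_upload_after_boot:
        verify_consistency(index_dir, data_dir)   # placeholder for the check

    # Write the index information into the index file for this slice.
    os.makedirs(index_dir, exist_ok=True)
    index_path = os.path.join(index_dir, slice_index["file_name"] + ".idx")
    with open(index_path, "a") as f:
        f.write(json.dumps(slice_index) + "\n")

    # Append offset = current size of the slice data file (0 if it is new).
    data_path = os.path.join(data_dir, slice_index["file_name"])
    offset = os.path.getsize(data_path) if os.path.exists(data_path) else 0
    return {"upload_position": data_dir,
            "file_name": slice_index["file_name"],
            "append_offset": offset}

def verify_consistency(index_dir, data_dir):
    # Placeholder: the disclosure only states that the stored recording files
    # and index files are checked for consistency after a power cycle.
    pass
```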
And step S40, the audio and video stream data are sent to the cloud according to the storage position and the file name, and the cloud stores the audio and video stream data.
When the intelligent camera obtains the storage position, slice file name, append offset and other information returned by the cloud, it uploads the audio and video stream data to the cloud according to the obtained information, and the cloud stores them. For example: if the currently obtained storage position is drive C and the file name is file A, the audio and video stream data are uploaded to file A on drive C in the cloud. Further, after the name of the storage file is known, the position at which the data are stored within that file also needs to be known. Referring to fig. 7, the step of sending the audio/video stream data to the cloud according to the storage position and the file name includes:
step S41, receiving the offset of the audio and video slice file sent by the cloud;
and step S42, determining the storage position of the audio and video stream data in the cloud corresponding file according to the offset, and sending the audio and video stream data to the storage position in the cloud corresponding file.
After receiving the offset of the audio and video slice file returned by the cloud, the intelligent camera determines, according to the offset, the storage position of the audio and video stream data in the corresponding file at the cloud, and sends the audio and video stream data to that storage position. The offset of the audio and video slice file refers to the number of bytes moved forwards or backwards from a specified position, and is used when locating data inside a file. Offsets generally fall into three types: moving a number of bytes backwards from the beginning of the file to find the target; moving a number of bytes forwards from the end of the file to find the target; or a relative position, moving forwards or backwards from the current position in the file to find the target. When the intelligent camera uploads the audio and video stream data into a file, it needs to know the offset and determines the storage position of the data in the file based on it. If several audio and video files are stored in file A in the form of a list, and the specific storage position of the current audio and video file is determined, based on the offset, to be the end of the file list, the current audio and video stream data need to be sent to the end of the list in file A.
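On the camera side, using the returned offset amounts to positioning at that offset in the target file before writing. The local-file sketch below is only an analogy; in practice the offset would be passed to the cloud storage upload interface rather than used on a local file.

```python
import os

def write_at_offset(path, offset, payload):
    """Place the stream data at the position indicated by the returned offset,
    e.g. at the end of the existing file list in file A."""
    mode = "r+b" if os.path.exists(path) else "w+b"
    with open(path, mode) as f:
        f.seek(offset)
        f.write(payload)
        return f.tell()   # the next append offset
```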
In this embodiment, the encoding parameters of the audio and video are sent to the cloud, and the cloud returns the encoding parameter number corresponding to the encoding parameters of each code stream; audio and video stream data are acquired, an encoding parameter number and timestamp information are set for each frame, and index information of each frame is created according to the timestamp information; when the accumulated amount of audio and video stream data exceeds a set threshold, the index information is sent to the cloud, and the cloud returns the storage position and the file name of the audio and video stream data based on the index information; the audio and video stream data are then uploaded to the cloud according to the storage position and the file name and stored by the cloud. The traditional three-in-one storage is split into three independently stored parts, and the corresponding mapping relation is established in the files, so that a switch of the upload code rate made by the user takes effect immediately, achieving true streaming recording without waiting for the next file to start; the playback side can read and play sequentially, avoiding skip reads, which improves cloud storage reading efficiency and playback speed; and the retrieval information is independent and used on its own for retrieval operations, with no need to distinguish by video resolution, so its centralized placement improves retrieval efficiency and thereby indirectly enhances the user experience.
Further, referring to fig. 3, a second embodiment of the audio/video storage method of the present application is proposed.
The second embodiment of the audio/video storage method is different from the first embodiment of the audio/video storage method in that before the step of sending the audio/video stream data to the cloud according to the storage location and the file name, the method further includes:
step S43, when the audio and video stream data is uploaded for the first time after being electrified, a flag bit is set in the index information;
and step S44, sending the index information carrying the flag bit to the cloud end, and carrying out data verification by the cloud end based on the flag bit.
Because the three-in-one storage is split into three independently stored parts, that is, the encoding parameters, the encoded data and the index data are stored independently, the three parts may fall out of sync after a power failure during storage. Therefore, when the intelligent camera is powered on and uploads audio and video stream data for the first time, a flag bit is set in advance; the flag bit is the power-on flag information of the current intelligent camera and is placed in the index information, so that it is sent to the cloud together with the index information. When the cloud obtains the flag bit, it performs a consistency check on the recording files and index files stored by the intelligent camera, ensuring that the surveillance video files, index files and parameter map stored in the cloud are consistent.
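A minimal sketch of the power-on flag handling on the camera side, under the assumption that the flag is simply an extra field carried in the index information (the field name is an assumption):

```python
class IndexSender:
    """Carries the power-on flag in the first batch of index information."""

    def __init__(self):
        self.first_upload_after_boot = True

    def prepare_index(self, index_info):
        if self.first_upload_after_boot:
            index_info["boot_flag"] = 1   # ask the cloud to run its consistency check
            self.first_upload_after_boot = False
        return index_info
```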
In this embodiment, consistency check is performed on each file through the flag bit, so that consistency of the monitoring video file, the index file and the parameter map stored in the cloud is ensured.
Further, referring to fig. 4, a third embodiment of the audio/video storage method of the present application is proposed.
The third embodiment of the audio/video storage method is different from the first and second embodiments of the audio/video storage method in that the step of sending the audio/video stream data to the cloud according to the storage location and the file name further includes:
Step S45, sending the audio and video stream data in append mode to an existing file of the cloud according to the storage position and the file name; or,
And step S46, sending the audio and video stream data to a newly created file in the cloud according to the storage position and the file name.
When the current audio and video stream data are collected, the intelligent camera judges whether a new storage file needs to be created at the cloud. If the currently collected audio and video stream data are new, a storage file needs to be created at the cloud, and the data are uploaded into that new storage file. If the collected audio and video stream data are to be appended to existing data, no new storage file needs to be created; the intelligent camera writes the new stream data after the already-uploaded file data by calling the append-write interface of the cloud storage streaming media file, and the local data can be deleted once the upload succeeds. If the upload fails, the position at which the file currently needs to be appended can be fetched again from the cloud storage and the append write retried; the original streaming media file or data can be deleted as soon as the append write succeeds.
When the append-write mode is used, the user can obtain the previously uploaded data in real time, or obtain a particular segment of data as needed, without affecting the new data being appended. The user can flexibly decide, according to their own rules, whether to upload a new streaming media file or append to an existing one, rather than following a fixed file-upload pattern. The append-write mode is applied according to the recording duration required for each streaming media file: when the recording duration contained in the streaming media file reaches a certain value, a new streaming media file is created to upload new audio and video stream data.
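The append-and-retry behaviour described above could be sketched as follows. Here `cloud` stands for a hypothetical client of the cloud storage append-write interface; its method names are assumptions rather than a real API.

```python
def upload_appending(cloud, file_name, data, max_retries=3):
    """Append stream data after the already-uploaded file data, retrying with a
    freshly fetched append position if an attempt fails."""
    for _ in range(max_retries):
        offset = cloud.get_append_position(file_name)  # where to append now
        if cloud.append(file_name, offset, data):
            return True          # success: the local copy may now be deleted
    return False                 # keep the local data and retry later
```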
In this embodiment, whether the collected audio and video stream data are new data is judged, so as to determine the upload mode and the storage file for the stream data; when the stream data are uploaded in append-write mode, the new stream data are written directly after the already-uploaded file data, which improves the upload efficiency of the stream data.
In order to better explain the scheme of the embodiments of the application, the overall flow of the audio and video storage method is described as follows:
referring to fig. 8, when the smart home camera is started, all audio and video encoding parameters of the current smart home camera are reported to the cloud end; the server side reads a media parameter file stored in the cloud end by the camera, and traverses all audio and video coding parameters reported by the camera to judge whether the coding parameters exist in the media parameter file or not; if the audio/video coding parameter does not exist, writing the audio/video coding parameter reported by the camera into a media parameter file; if the audio and video coding parameter number exists, the audio and video coding parameter number in the media parameter file is returned. Further judging whether the encoding parameters which are not traversed exist at present, if so, returning to the step of judging whether the encoding parameters exist in the media parameter file; and if the code stream does not exist, returning the code parameter number of the code stream of each path of the camera at the cloud end from the server end. The camera collects audio and video stream data, a coding parameter number and time stamp information are placed for each frame of audio and video stream data, and index information is created for I frames of audio and video stream data according to the time stamp information. Judging whether the audio and video data of the camera head end is accumulated to exceed a limit value, if not, returning to the step of executing the camera to acquire audio and video stream data; if so, dividing the audio and video files to be stored into a plurality of audio and video slice files, determining index information corresponding to the plurality of audio and video slice files respectively, and uploading the index information of the audio and video slice files to a server side. And the server side generates an uploading position and a slice file name of the audio and video stream data according to the index information of the audio and video slice file, generates an index file corresponding to the slice, and writes the index information into the index file. Further judging whether the camera uploads audio and video data for the first time after being electrified, if so, completing consistency check on the camera audio and video data related files uploaded to the cloud end by the server end; and if not, calculating the slice file apend offset. And returning the uploading position, the slice file name and the apend offset of the audio and video streaming data to the camera for uploading the audio and video streaming data. After receiving data returned by the cloud, the camera determines the storage position of the audio and video stream data in the file according to the apend offset, and uploads the audio and video stream data to the storage position of the file corresponding to the cloud in an additional uploading mode.
In this embodiment, the traditional three-in-one storage is split into three independently stored parts, and the corresponding mapping relation is established in the files, so that a switch of the upload code rate made by the user takes effect immediately and true streaming recording is realized without waiting for the next file to start; the playback side can read and play sequentially, avoiding skip reads, which improves cloud storage reading efficiency and playback speed; and the retrieval information is independent and used on its own for retrieval operations, with no need to distinguish by video resolution, so its centralized placement improves retrieval efficiency and thereby indirectly enhances the user experience.
In addition, this application still provides an audio and video storage system, the system includes:
the first sending module is used for sending the coding parameters of the audio and video to a cloud end, and the cloud end returns the coding parameter numbers corresponding to the coding parameters of each path of code stream;
the acquisition module is used for acquiring audio and video stream data, setting the coding parameter number and the timestamp information for each frame of audio and video stream data, and creating index information of each frame of audio and video stream data according to the timestamp information;
the second sending module is used for sending the index information to the cloud end, and the cloud end returns the storage position and the file name of the audio and video stream data based on the index information;
and the third sending module is used for sending the audio and video streaming data to the cloud according to the storage position and the file name, and storing the audio and video streaming data by the cloud.
Further, the obtaining module comprises: a storage unit and a setting unit;
the storage unit is used for storing the acquired audio and video stream data into a queue based on the sequence of the timestamp information;
the setting unit is used for setting the coding parameter number and the timestamp information for each frame of audio/video stream data in the queue.
Further, the second sending module includes: a dividing unit and a transmitting unit;
the dividing unit is used for dividing the audio and video files to be stored into a plurality of audio and video slice files and creating index information corresponding to the plurality of audio and video slice files respectively;
the sending unit is used for sending the index information corresponding to the audio and video slice files to the cloud.
Further, the third sending module includes: a receiving unit and a transmitting unit;
the receiving unit is used for receiving the offset of the audio and video slice file sent by the cloud;
the sending unit is used for determining the storage position of the audio and video streaming data in the file corresponding to the cloud end according to the offset, and sending the audio and video streaming data to the storage position in the file corresponding to the cloud end.
Further, the third sending module further includes: a setting unit and a transmitting unit;
the setting unit is used for setting a flag bit in the index information when the audio and video stream data is uploaded for the first time after power-on;
the sending unit is used for sending the index information carrying the flag bit to the cloud end, and the cloud end performs data verification based on the flag bit.
Further, the sending unit is further configured to send the audio/video stream data in append mode to an existing file in the cloud according to the storage location and the file name; or,
and the sending unit is further used for sending the audio and video stream data to the newly built file in the cloud according to the storage position and the file name.
Further, the second sending module further includes: an acquisition unit and a judgment unit;
the acquisition unit is used for acquiring the accumulated quantity of the audio and video stream data;
and the judging unit is used for judging that the step of sending the index information to the cloud end is executed when the accumulated quantity exceeds a set threshold value.
The implementation of the functions of each module of the audio/video storage system is similar to the process in the embodiment of the method, and is not repeated here.
In addition, the application also provides a terminal. The terminal includes a memory, a processor, and an audio and video storage program stored in the memory and running on the processor. The terminal sends the encoding parameters of the audio and video to the cloud, and the cloud returns the encoding parameter number corresponding to the encoding parameters of each code stream; it acquires audio and video stream data, sets an encoding parameter number and timestamp information for each frame of audio and video stream data, and creates index information of each frame according to the timestamp information; when the accumulated amount of audio and video stream data exceeds a set threshold, it sends the index information to the cloud, and the cloud returns the storage position and the file name of the audio and video stream data based on the index information; it then uploads the audio and video stream data to the cloud according to the storage position and the file name, and the cloud stores them. The traditional three-in-one storage is split into three independently stored parts, and the corresponding mapping relation is established in the files, so that streaming recording is realized and the reading efficiency of cloud storage and the retrieval efficiency of the files are improved.
In addition, the present application also provides a computer readable storage medium, where an audio and video storage program is stored on the computer readable storage medium, and when the audio and video storage program is executed by a processor, the steps of the audio and video storage method are implemented.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
While alternative embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following appended claims be interpreted as including alternative embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An audio-video storage method, characterized in that the method comprises:
sending the encoding parameters of the audio and video to a cloud, the cloud returning the encoding parameter numbers corresponding to the encoding parameters of each code stream;
acquiring audio and video stream data, setting the coding parameter number and the timestamp information for each frame of audio and video stream data, and creating index information of each frame of audio and video stream data according to the timestamp information;
sending the index information to the cloud, the cloud returning the storage position and the file name of the audio and video stream data based on the index information;
and sending the audio and video stream data to the cloud according to the storage position and the file name, and storing the audio and video stream data by the cloud.
2. The audio-video storage method according to claim 1, wherein the step of setting the coding parameter number and the time stamp information for each frame of audio-video stream data comprises:
storing the acquired audio and video stream data into a queue in the order of the timestamp information;
and setting the coding parameter number and the timestamp information for each frame of audio/video stream data in the queue.
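For illustration only (not part of the claim), a minimal sketch of keeping frames ordered by timestamp before they are numbered, using Python's heapq; the frame layout is an assumption.

# Sketch only: the frame layout is assumed; heapq keeps frames ordered by timestamp.
import heapq

def enqueue_frames(raw_frames):
    """Yield frames in timestamp order, ready for numbering."""
    queue = []
    for seq, frame in enumerate(raw_frames):
        # seq breaks ties so frames with equal timestamps keep arrival order.
        heapq.heappush(queue, (frame["timestamp_ms"], seq, frame))
    while queue:
        _, _, frame = heapq.heappop(queue)
        yield frame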
3. The audio/video storage method according to claim 1, wherein the step of sending the index information to the cloud includes:
dividing an audio/video file to be stored into a plurality of audio/video slice files, and creating index information corresponding to the plurality of audio/video slice files respectively;
and sending index information corresponding to the plurality of audio and video slice files to the cloud.
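For illustration only (not part of the claim), the slicing step might be sketched as follows; the 10-second slice duration and the index fields are assumptions.

# Sketch: the slice duration and index fields are illustrative assumptions.
def slice_by_duration(frames, slice_ms=10_000):
    """Group timestamp-ordered frames into ~10 s slices and build one index
    entry per slice file."""
    slices, current, slice_start = [], [], None
    for frame in frames:
        if slice_start is None:
            slice_start = frame["timestamp_ms"]
        if frame["timestamp_ms"] - slice_start >= slice_ms and current:
            slices.append(current)
            current, slice_start = [], frame["timestamp_ms"]
        current.append(frame)
    if current:
        slices.append(current)

    index = [
        {"slice_no": i,
         "start_ms": s[0]["timestamp_ms"],
         "end_ms": s[-1]["timestamp_ms"],
         "frame_count": len(s)}
        for i, s in enumerate(slices)
    ]
    return slices, index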
4. The audio/video storage method according to any one of claims 1 to 3, wherein the step of sending the audio/video stream data to the cloud according to the storage location and the file name comprises:
receiving the offset of the audio and video slice file sent by the cloud;
and determining, according to the offset, the storage position of the audio and video stream data in the corresponding file in the cloud, and sending the audio and video stream data to the storage position in the corresponding file in the cloud.
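For illustration only (not part of the claim), writing at the offset returned by the cloud might look like the sketch below; a local POSIX file stands in for whatever write primitive the cloud storage actually exposes.

# Sketch: a local file stands in for the cloud-side write primitive,
# which the patent does not specify (os.pwrite is Unix-only).
import os

def write_at_offset(path, offset, payload):
    """Write payload at the byte offset the cloud reported for this slice."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT)   # create the file if absent
    try:
        os.pwrite(fd, payload, offset)
    finally:
        os.close(fd)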
5. The audio/video storage method according to any one of claims 1 to 3, wherein before the step of sending the audio/video stream data to the cloud according to the storage location and the file name, the method further includes:
when the audio and video stream data is uploaded for the first time after power-on, setting a flag bit in the index information;
and sending the index information carrying the flag bit to the cloud, and performing data verification by the cloud based on the flag bit.
6. The audio/video storage method according to any one of claims 1 to 3, wherein the step of sending the audio/video stream data to the cloud according to the storage location and the file name further includes:
appending the audio and video stream data to an existing file in the cloud according to the storage position and the file name; or,
sending the audio and video stream data to a newly created file in the cloud according to the storage position and the file name.
7. The audio-video storage method according to any one of claims 1 to 3, wherein before the step of sending the index information to the cloud, the method comprises:
acquiring the accumulated quantity of the audio and video stream data;
and when the accumulated quantity exceeds a set threshold value, executing the step of sending the index information to the cloud.
8. An audio-video storage system, the system comprising:
the first sending module is used for sending the encoding parameters of the audio and video to a cloud, and the cloud returns the encoding parameter numbers corresponding to the encoding parameters of each code stream;
the acquisition module is used for acquiring audio and video stream data, setting the coding parameter number and the timestamp information for each frame of audio and video stream data, and creating index information of each frame of audio and video stream data according to the timestamp information;
the second sending module is used for sending the index information to the cloud end, and the cloud end returns the storage position and the file name of the audio and video stream data based on the index information;
and the third sending module is used for sending the audio and video streaming data to the cloud according to the storage position and the file name, and storing the audio and video streaming data by the cloud.
9. A terminal, characterized in that the terminal comprises a memory, a processor and an audio and video storage program stored on the memory and running on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 7 when executing the audio and video storage program.
10. A computer-readable storage medium, characterized in that an audio and video storage program is stored on the computer-readable storage medium, and when executed by a processor, the audio and video storage program implements the steps of the method according to any one of claims 1 to 7.
CN202011351568.XA 2020-11-25 2020-11-25 Audio and video storage method, system, terminal and computer readable storage medium Active CN112492345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011351568.XA CN112492345B (en) 2020-11-25 2020-11-25 Audio and video storage method, system, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112492345A true CN112492345A (en) 2021-03-12
CN112492345B CN112492345B (en) 2023-03-24

Family

ID=74935567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011351568.XA Active CN112492345B (en) 2020-11-25 2020-11-25 Audio and video storage method, system, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112492345B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106850710A (en) * 2015-12-03 2017-06-13 杭州海康威视数字技术股份有限公司 A kind of safe data cloud storage system, client terminal, storage server and application process
CN107979621A (en) * 2016-10-24 2018-05-01 杭州海康威视数字技术股份有限公司 A kind of storage of video file, positioning playing method and device
CN109189724A (en) * 2018-07-18 2019-01-11 北京世纪东方通讯设备有限公司 Improve the method and device of video monitoring system audio, video data storage efficiency
CN111327896A (en) * 2018-12-13 2020-06-23 浙江宇视科技有限公司 Video transmission method and device, electronic equipment and readable storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449001A (en) * 2021-12-30 2022-05-06 天翼云科技有限公司 Cloud storage implementation method, device, equipment and storage medium of streaming media data
CN114302180A (en) * 2021-12-31 2022-04-08 深圳市创维软件有限公司 Video single-frame playing method, device, server, system and storage medium
CN114302180B (en) * 2021-12-31 2024-02-06 深圳市创维软件有限公司 Video single-frame playing method, device, server, system and storage medium
CN116578741A (en) * 2023-07-12 2023-08-11 南京奥看信息科技有限公司 View hybrid storage method, device and system
CN116578741B (en) * 2023-07-12 2023-10-20 南京奥看信息科技有限公司 View hybrid storage method, device and system
CN117156172A (en) * 2023-10-30 2023-12-01 江西云眼视界科技股份有限公司 Video slice reporting method, system, storage medium and computer
CN117156172B (en) * 2023-10-30 2024-01-16 江西云眼视界科技股份有限公司 Video slice reporting method, system, storage medium and computer

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant