CN112823529B - Video decoding method, device, electronic equipment and computer readable storage medium - Google Patents

Video decoding method, device, electronic equipment and computer readable storage medium

Info

Publication number
CN112823529B
Authority
CN
China
Prior art keywords
video
audio
stream
analyzer
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201880098495.2A
Other languages
Chinese (zh)
Other versions
CN112823529A (en)
Inventor
胡小朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN112823529A publication Critical patent/CN112823529A/en
Application granted granted Critical
Publication of CN112823529B publication Critical patent/CN112823529B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video decoding method, comprising: performing a first parsing process on an acquired video file through a first parser; when an error in the first parsing process is detected, acquiring first parsing data and error data obtained during the first parsing process; acquiring a second parser according to the error data, and performing a second parsing process on the video file through the second parser to obtain second parsing data, wherein the first parsing data and the second parsing data are used for representing attributes of the video file; acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and decoding the audio stream and the video stream through a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.

Description

Video decoding method, device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video decoding method, apparatus, electronic device, and computer readable storage medium.
Background
An electronic device can display pictures, play music, play videos, and so on. Video files are generally generated with different encoding modes, so video files in different formats are produced. An encoded video file needs to be decoded before it can be played. Therefore, video files in different formats are played with different decoding modes.
Disclosure of Invention
Embodiments of the present application provide a video decoding method and apparatus, an electronic device, and a computer readable storage medium.
A video decoding method, comprising:
performing a first parsing process on an acquired video file through a first parser;
when an error in the first parsing process is detected, acquiring first parsing data and error data obtained during the first parsing process;
acquiring a second parser according to the error data, and performing a second parsing process on the video file through the second parser to obtain second parsing data, wherein the first parsing data and the second parsing data are used for representing attributes of the video file;
acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
decoding the audio stream and the video stream through a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
A video decoding device, comprising:
a first parsing module, configured to perform a first parsing process on an acquired video file through a first parser;
an error reporting module, configured to acquire first parsing data and error data obtained during the first parsing process when an error in the first parsing process is detected;
a second parsing module, configured to acquire a second parser according to the error data, and perform a second parsing process on the video file through the second parser to obtain second parsing data, wherein the first parsing data and the second parsing data are used for representing attributes of the video file;
a splitting processing module, configured to acquire an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
a decoding processing module, configured to decode the audio stream and the video stream through a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
An electronic device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
performing a first parsing process on an acquired video file through a first parser;
when an error in the first parsing process is detected, acquiring first parsing data and error data obtained during the first parsing process;
acquiring a second parser according to the error data, and performing a second parsing process on the video file through the second parser to obtain second parsing data, wherein the first parsing data and the second parsing data are used for representing attributes of the video file;
acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
decoding the audio stream and the video stream through a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
performing a first parsing process on an acquired video file through a first parser;
when an error in the first parsing process is detected, acquiring first parsing data and error data obtained during the first parsing process;
acquiring a second parser according to the error data, and performing a second parsing process on the video file through the second parser to obtain second parsing data, wherein the first parsing data and the second parsing data are used for representing attributes of the video file;
acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
decoding the audio stream and the video stream through a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a video decoding method in one embodiment;
FIG. 2 is a flow chart of a video decoding method in another embodiment;
FIG. 3 is a flow chart of a video decoding method in yet another embodiment;
FIG. 4 is a flow diagram of a process for parsing a video file in one embodiment;
FIG. 5 is a flow diagram of a process for decoding a video file in one embodiment;
FIG. 6 is a schematic diagram of a video decoding apparatus according to an embodiment;
fig. 7 is a block diagram of a part of a structure of a mobile phone related to an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first parser may be referred to as a second parser, and similarly, a second parser may be referred to as a first parser, without departing from the scope of the present application. Both the first parser and the second parser are parsers, but they are not the same parser.
Fig. 1 is a flow chart of a video decoding method in one embodiment. As shown in fig. 1, the video decoding method includes steps 102 to 110. Wherein:
step 102, performing a first parsing process on the acquired video file through a first parser.
In one embodiment, the electronic device may encode video files with a variety of encoding modes, and the file formats of video files encoded with different encoding modes are different. For example, common video coding formats include WMV (Windows Media Video), MPEG (Moving Picture Experts Group), and FFMPEG (Fast Forward Moving Picture Experts Group).
Specifically, the video file contains an audio stream and a video stream, and the main function of encoding is to compress video pixel data and audio data into a video code stream, thereby reducing the data volume of the video file. The electronic device may acquire a video file stored locally, or may receive a video file sent by another electronic device. For example, a user may start a third party application in the terminal, initiate an acquisition instruction of the video file through the third party application, and then the terminal may initiate an acquisition request for acquiring the video file to the server according to the acquisition instruction, and after receiving the acquisition request, the server sends the corresponding video file to the terminal.
After the electronic device acquires the video file, it may perform a first parsing process on the video file through a first parser. The parsing process refers to a process of acquiring attribute information of the video file, for example, the playback duration of the video stream, the size of the video images, and the audio format of the audio stream in the video file. The electronic device may parse the video file with an integrated parser, or may call an API (Application Programming Interface) of a third-party parsing platform to parse the video file, which is not limited herein.
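For illustration only, a minimal sketch of what such a parsing step could look like on the Android platform is given below, using the MediaExtractor API that is referred to later in this description; the helper name and the attributes it prints are examples, not part of the claimed method.

```java
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.io.IOException;

public class FirstParseSketch {
    // Hypothetical helper: read basic attributes of a local video file.
    static void parseAttributes(String path) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        try {
            extractor.setDataSource(path);
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                MediaFormat format = extractor.getTrackFormat(i);
                String mime = format.getString(MediaFormat.KEY_MIME);
                if (mime != null && mime.startsWith("video/")) {
                    // Video track attributes: image size and playback duration.
                    int width = format.getInteger(MediaFormat.KEY_WIDTH);
                    int height = format.getInteger(MediaFormat.KEY_HEIGHT);
                    long durationUs = format.containsKey(MediaFormat.KEY_DURATION)
                            ? format.getLong(MediaFormat.KEY_DURATION) : -1;
                    System.out.println("video " + mime + " " + width + "x" + height
                            + " duration(us)=" + durationUs);
                } else if (mime != null && mime.startsWith("audio/")) {
                    // Audio track attributes: audio format, sample rate, channel count.
                    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
                    int channels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
                    System.out.println("audio " + mime + " " + sampleRate + " Hz x" + channels);
                }
            }
        } finally {
            extractor.release();
        }
    }
}
```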
Step 104, when an error in the first parsing process is detected, acquiring first parsing data and error data obtained during the first parsing process.
The parsing modes of video files in different formats are different, so when the first parser performs the first parsing process on the acquired video file, an error is reported during the first parsing process if the first parser does not support parsing the video file. When the video file is parsed by the first parser, first parsing data representing part of the attributes of the video file can be obtained, and error data can also be obtained.
The reason for the error in the first parsing process can be obtained from the error data. For example, the error data may read "the first parser cannot parse the video stream whose Codec ID (Codec Identification) is V_MS/VFW/FOURCC", from which it can be known that the parsing failure is due only to the format of the video stream.
Step 106, acquiring a second parser according to the error data, and performing a second parsing process on the video file through the second parser to obtain second parsing data, wherein the first parsing data and the second parsing data are used for representing attributes of the video file.
When an error is reported in the first parsing process performed by the first parser, the reason for the error can be obtained from the error data, so the second parser is acquired according to the error data. A second parsing process is then performed on the video file with the acquired second parser to obtain the second parsing data.
For example, it may be determined from the error data that the first parser cannot parse the audio stream in the video file because of its audio format. The electronic device may then acquire a second parser that can parse an audio stream in that audio format, and parse the audio stream with it.
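Only as an illustration of this step, a small registry keyed by the codec identifier found in the error data might be used to look up the supplemental parser; the Parser interface and the registered names below are assumptions, not part of the patent:

```java
import java.util.HashMap;
import java.util.Map;

public class SecondParserRegistrySketch {
    // Hypothetical registry: map the codec identifier reported in the error data
    // to a supplemental parser able to handle that stream.
    interface Parser { void parse(String videoFilePath); }

    private static final Map<String, Parser> REGISTRY = new HashMap<>();

    static void register(String codecId, Parser parser) {
        REGISTRY.put(codecId, parser);
    }

    // Returns the second parser for the codec named in the error data, or null if none.
    static Parser secondParserFor(String codecIdFromErrorData) {
        return REGISTRY.get(codecIdFromErrorData);
    }
}
```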
Step 108, acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data.
In the embodiments provided in this application, the relevant attributes of the video file may be obtained from the first parsing data and the second parsing data; the first parsing data and the second parsing data may represent different attributes of the video file or the same attribute, which is not limited herein. Specifically, after obtaining the first parsing data and the second parsing data, the electronic device may call a decoder for the video file and decode the video file to obtain the audio stream and the video stream in the video file.
Step 110, decoding the audio stream and the video stream through a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
In one embodiment, after the audio stream and the video stream are obtained, they may be decoded by the target decoder corresponding to the second parser, so that the decoded audio stream and video stream can be played on the electronic device. Specifically, when the electronic device acquires the second parser according to the error data, it may acquire the parameters of the target decoder at the same time. Thus, when the video stream and the audio stream are decoded, the corresponding target decoder can be called directly according to the acquired parameters to perform the decoding process.
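A rough sketch of how the target decoder could be created and started from the acquired parameters, assuming the Android MediaCodec API and that the target decoder's MIMETYPE has been stored in the track's MediaFormat, is given below; it is illustrative only:

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

public class TargetDecoderSketch {
    // Hypothetical helper: create and start a decoder from the MediaFormat produced
    // by the parsing process. A video decoder renders to the given surface; an
    // audio decoder is created with a null surface.
    static MediaCodec openDecoder(MediaFormat format, Surface surfaceOrNull) throws IOException {
        String mime = format.getString(MediaFormat.KEY_MIME);
        MediaCodec decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(format, surfaceOrNull, null, 0);
        decoder.start();
        return decoder;
    }
}
```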
According to the video decoding method provided in this embodiment, after the video file is acquired, a first parsing process is performed on the video file through the first parser. When an error is reported in the first parsing process, first parsing data and error data obtained during the first parsing process are acquired. A second parser is then acquired according to the error data, and the video file is processed through the second parser to obtain second parsing data. Finally, the audio stream and the video stream in the video file are acquired according to the first parsing data and the second parsing data, and the audio stream and the video stream are decoded through a target decoder corresponding to the second parser. In this way, when the first parsing process fails, a parser capable of correctly parsing the video file can be acquired to parse the video file, and the video file is decoded by the decoder corresponding to the acquired parser, which improves the accuracy of decoding the video file.
Fig. 2 is a flowchart of a video decoding method in another embodiment. As shown in fig. 2, the video decoding method includes steps 202 to 218. Wherein:
step 202, obtaining a video file, and obtaining a file format of the video file.
In one embodiment, the electronic device may read the video file from the locally stored folder, and may also receive the video file sent by other electronic devices, which is not limited herein. For example, when the electronic device detects the play command, a file identifier corresponding to the play command is obtained, and then a video file corresponding to the file identifier is obtained. After the video file is acquired, the electronic device may acquire a file format of the video file.
Taking the Android system as an example, the MediaExtractor API may be called to read the bytes at the beginning of the video file; the MIMETYPE (Multipurpose Internet Mail Extensions type) of the video file may be obtained from the bytes that are read, and the file format of the video file may be determined from the MIMETYPE.
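A minimal sketch of this kind of check is given below, assuming the container is identified by sniffing the leading bytes of the file; only the Matroska/EBML signature 0x1A 0x45 0xDF 0xA3 is recognized here, and this is not the MediaExtractor implementation itself:

```java
import java.io.FileInputStream;
import java.io.IOException;

public class ContainerSniffSketch {
    // Hypothetical helper: map the first bytes of the file to a container MIMETYPE.
    // A real implementation would recognize many more containers than MKV.
    static String sniffMimeType(String path) throws IOException {
        byte[] head = new byte[4];
        try (FileInputStream in = new FileInputStream(path)) {
            if (in.read(head) != 4) {
                return null;
            }
        }
        // Matroska (MKV) files begin with the EBML header 0x1A 0x45 0xDF 0xA3.
        if ((head[0] & 0xFF) == 0x1A && (head[1] & 0xFF) == 0x45
                && (head[2] & 0xFF) == 0xDF && (head[3] & 0xFF) == 0xA3) {
            return "video/x-matroska";
        }
        return null;
    }
}
```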
Step 204, when the file format is a preset file format, performing a first parsing process on the video file through a first parser corresponding to the preset file format.
Specifically, the parser used to parse a video file corresponds to the file format of the video file. When the file format of the read video file is the preset file format, the electronic device may call the first parser corresponding to the preset file format to perform the first parsing process on the video file. When the file format of the read video file is not the preset file format, a video parser corresponding to the other file format may be called to parse the video file.
For example, when the MIMETYPE of the video file is read as "video/x-matroska", it is determined that the video file is in the MKV (Matroska multimedia container) format. When the video file is judged to be in the MKV format, a MatroskaExtractor parser may be created to perform the first parsing process on the video file.
Step 206, when an error in the first parsing process is detected, acquiring first parsing data and error data obtained during the first parsing process.
Step 208, obtaining the audio format and the video format corresponding to the video file according to the error data.
The audio format corresponding to the video file represents the format of the audio stream in the video file, and the video format represents the format of the video stream. When an error is reported during the first parsing process, this indicates that the system of the electronic device does not support parsing a video file in this file format. The audio format and the video format corresponding to the video file are then obtained from the error data.
Step 210, obtaining a second audio parser according to the audio format, and obtaining a second video parser according to the video format.
The second parser acquired in the electronic device includes a second audio parser and a second video parser; the corresponding second audio parser can be acquired according to the audio format, and the corresponding second video parser can be acquired according to the video format.
Step 212, performing a second parsing process on the video file through the second audio parser and the second video parser to obtain second parsing data.
After the second audio parser and the second video parser are obtained, the second parsing process can be performed on the video file through them to obtain the second parsing data. Specifically, the second audio parser and the second video parser may be code blocks added for performing the second parsing process on the video file: the attribute information corresponding to the audio stream is obtained by parsing the video file through the second audio parser, and the attribute information corresponding to the video stream is obtained by parsing the video file through the second video parser. The second parsing data is obtained from the attribute information corresponding to the audio stream and the attribute information corresponding to the video stream.
Step 214, obtaining the audio stream and the video stream in the video file according to the first parsing data and the second parsing data.
Step 216, performing audio decoding processing on the audio stream according to a target audio decoder corresponding to the second audio parser to obtain a decoded audio stream.
After the audio stream and the video stream are obtained, they can be decoded according to the target decoders corresponding to the second parser to obtain the decoded audio stream and video stream. The target decoder includes a target audio decoder and a target video decoder; the audio stream is decoded according to the target audio decoder to obtain the decoded audio stream, and the video stream is decoded according to the target video decoder to obtain the decoded video stream.
Specifically, before the audio stream is decoded, the audio decoding parameters corresponding to the audio stream may be obtained. The corresponding target audio decoder is then called according to the acquired audio decoding parameters to perform audio decoding processing on the audio stream.
Step 218, performing video decoding processing on the video stream according to the target video decoder corresponding to the second video parser to obtain a decoded video stream.
Before the video stream is decoded, the video decoding parameters corresponding to the video stream may be acquired. The corresponding target video decoder is then called according to the acquired video decoding parameters to perform video decoding processing on the video stream.
In one embodiment, a method for decoding the video stream may specifically include: acquiring a first target video decoder corresponding to the second video parser according to a first configuration mode, and performing video decoding processing on the video stream according to the first target video decoder to obtain a decoded video stream; or acquiring a second target video decoder corresponding to the second video parser according to a second configuration mode, and performing video decoding processing on the video stream according to the second target video decoder to obtain a decoded video stream.
For example, the first target video decoder may be an MPEG4 video decoder and the second target video decoder may be an FFMPEG video decoder. The video parameters configured for the MPEG4 video decoder and the FFMPEG video decoder are different, and the interfaces that are called are different, so the ways of acquiring the MPEG4 video decoder and the FFMPEG video decoder are also different.
The decoded audio stream and the decoded video stream correspond to each other, and the electronic device can read the decoded audio stream and the decoded video stream at the same time and output them.
As shown in fig. 3, in an embodiment provided in the present application, the decoding process of the audio stream may specifically further include:
Step 302, when the audio format is a first audio format, performing audio decoding processing on the audio stream according to a first target audio decoder corresponding to the second audio parser to obtain a decoded audio stream.
When audio decoding is performed on an audio stream, different audio decoders are acquired according to an audio format. When the audio format is the first audio format, acquiring a first target audio decoder to perform audio decoding processing on the audio stream, and obtaining a decoded audio stream.
Step 304, when the audio format is a second audio format, performing audio decoding processing on the audio stream according to a second target audio decoder corresponding to the second audio parser to obtain a decoded audio stream.
And when the audio format is the second audio format, acquiring a second target audio decoder to perform audio decoding processing on the audio stream to obtain a decoded audio stream. It will be appreciated that the first and second target audio decoders obtained are decoders that support decoding processing of the audio streams in the first and second audio formats, respectively.
According to the video decoding method provided in this embodiment, when the file format of the acquired video file is the preset file format, a first parsing process is performed on the video file through the first parser. When an error is reported in the first parsing process, first parsing data and error data obtained during the first parsing process are acquired. A second parser is then configured according to the error data, and the video file is processed through the second parser to obtain second parsing data. Finally, the audio stream and the video stream in the video file are acquired according to the first parsing data and the second parsing data, and the audio stream and the video stream are decoded through a target decoder corresponding to the second parser. In this way, when the first parsing process fails, a parser capable of correctly parsing the video file can be configured to parse the video file, and the video file is decoded by the decoder corresponding to the configured parser, which improves the accuracy of decoding the video file.
FIG. 4 is a flow diagram of a process for parsing a video file in one embodiment. As shown in fig. 4, the process of parsing a video file may specifically include:
Step 402, after the electronic device obtains the video file, a MediaExtractor parser may be invoked.
Step 404, parsing the video file through the invoked MediaExtractor parser to obtain the MIMETYPE of the video file.
Step 406, determining whether the MIMETYPE of the video file is "video/x-matroska". When the MIMETYPE of the video file is "video/x-matroska", it is determined that the file format of the video file is the preset file format, i.e. the MKV (Matroska multimedia container) format, and step 408 is executed; when the MIMETYPE of the video file is not "video/x-matroska", it is determined that the file format of the video file is not the preset file format, and step 438 is executed.
Step 408, invoking a MatroskaExtractor parser (the first parser) to perform the first parsing process on the video file in the MKV format.
Step 410, detecting whether the first parsing process reports an error. When the first parsing process reports an error, it is determined that the system does not support parsing the video file in the MKV format, first parsing data and error data are acquired, and step 412 is executed; when the first parsing process does not report an error, it is determined that the system supports parsing the video file in the MKV format, and step 438 is executed.
Step 412, enhancing the MatroskaExtractor by adding a second parser, and performing the second parsing process through the enhanced MatroskaExtractor to obtain second parsing data.
Specifically, after parsing, decoder parameters may be configured through steps 414 to 434. The electronic device can then call the corresponding decoder according to the configured decoder parameters during the decoding process. It will be appreciated that when the decoder parameters are configured, the configured parameters may be written into the video file, and a correspondence may be established between the audio stream and the video stream. In this way, during decoding, after the electronic device reads the audio stream and the video stream, it can read the decoder parameters configured for them, and then call the corresponding decoder to perform the decoding process according to the decoder parameters that are read.
Step 414, obtaining the audio format of the audio stream in the video file according to the error data. When the Codec ID of the audio stream is "A_AAC" and the encoding profile is not "1", it is determined that the audio stream is in the first audio format, and step 416 is executed; when the Codec ID of the audio stream is "A_AAC" and the encoding profile is "1", it is determined that the audio stream is in the second audio format, and step 420 is executed.
In step 416, first audio decoding parameters are configured.
Specifically, the first target audio decoder may be an audio decoder of the system platform. The step of configuring the first audio decoding parameters may specifically include: the MIMETYPE of the first target audio decoder is configured as "audio/mp4a-latm", and an audio ESDS field is added for the AAC audio decoder.
It will be appreciated that when the audio format of the audio stream is the first audio format, the first audio decoding parameters are configured in step 416, and the first target audio decoder may be called for audio decoding according to the configured first audio decoding parameters.
At step 418, the second audio decoding parameters are configured.
In one embodiment, the second target audio decoder may be an audio decoder of an FFMPEG system, and the step of configuring the second audio decoding parameters may specifically include: the MIMETYPE of the second target audio decoder is configured as "audio/ffmpeg" and the Codec ID is configured as "0x15002".
Specifically, when the audio format of the audio stream is the second audio format, the second audio decoding parameters are configured in step 418, and then the second target audio decoder may be called to perform the audio decoding process according to the configured second audio decoding parameters.
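A rough sketch of steps 416 and 418 is given below, assuming the parameters are carried in an Android MediaFormat. The "audio/ffmpeg" MIMETYPE and the "codec-id" key are treated as vendor-specific values understood by the FFMPEG-based decoder rather than standard Android keys, and the ESDS bytes are assumed to have been extracted from the MKV track during parsing:

```java
import android.media.MediaFormat;
import java.nio.ByteBuffer;

public class AudioDecoderParamsSketch {
    // Step 416 (sketch): first audio decoding parameters for the platform AAC decoder.
    // The ESDS bytes are assumed to come from the track's codec private data.
    static MediaFormat firstAudioParams(MediaFormat format, byte[] esds) {
        format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
        format.setByteBuffer("csd-0", ByteBuffer.wrap(esds)); // ESDS carried as codec-specific data (assumption)
        return format;
    }

    // Step 418 (sketch): second audio decoding parameters for the FFMPEG-based decoder.
    // "audio/ffmpeg" and "codec-id" are the vendor-specific values quoted in the text.
    static MediaFormat secondAudioParams(MediaFormat format) {
        format.setString(MediaFormat.KEY_MIME, "audio/ffmpeg");
        format.setInteger("codec-id", 0x15002);
        return format;
    }
}
```

Whichever parameters are configured here are read back in the decoding stage to decide whether the platform decoder or the FFMPEG decoder is called.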
Step 420, obtaining, according to the error data, that the video format of the video stream in the video file is "V_MS/VFW/FOURCC".
Step 422, judging, according to a predefined selection policy, whether to decode the video stream with the first target video decoder; if so, step 424 is executed; if not, step 426 is executed.
The predefined selection policy may be a selection policy input by a user, or a policy selected automatically by the electronic device, which is not limited herein. For example, the electronic device uses the first target video decoder by default, and when the remaining battery level of the electronic device is below a threshold, it uses the second target video decoder to perform the video decoding process on the video stream. A sketch of such a policy is given below.
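Purely as an illustration of such a selection policy (the battery threshold and the user-preference flag are assumptions, not values taken from the description), a sketch might be:

```java
public class DecoderSelectionSketch {
    // Hypothetical policy: use the first target video decoder by default and fall
    // back to the second target video decoder when the remaining battery level
    // drops below a threshold, or when the user has asked for it explicitly.
    static final int BATTERY_THRESHOLD_PERCENT = 20; // assumed value

    static boolean useFirstTargetVideoDecoder(int batteryPercent, boolean userPrefersSecondDecoder) {
        if (userPrefersSecondDecoder) {
            return false; // selection policy input by the user
        }
        return batteryPercent >= BATTERY_THRESHOLD_PERCENT; // automatic policy
    }
}
```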
In step 424, the first video decoding parameters are configured.
The first target video decoder may be an MPEG4 video decoder native to the electronic device system, and the step of configuring the first video decoding parameters may specifically include: the MIMETYPE of the first target video decoder is configured as "video/mp4v-es", and a video ESDS field is added.
At step 426, second video decoding parameters are configured.
The second target video decoder may be an FFMPEG video decoder, and the step of configuring the second video decoding parameters may specifically include: the MIMETYPE of the second target video decoder is configured as "video/ffmpeg", and the Codec ID is configured as "0xd".
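Mirroring the audio case, a sketch of steps 424 and 426 under the same assumptions (the parameters are carried in a MediaFormat, and "video/ffmpeg" together with the "codec-id" key are vendor-specific values) could be:

```java
import android.media.MediaFormat;
import java.nio.ByteBuffer;

public class VideoDecoderParamsSketch {
    // Step 424 (sketch): first video decoding parameters for the platform MPEG4 decoder.
    static MediaFormat firstVideoParams(MediaFormat format, byte[] esds) {
        format.setString(MediaFormat.KEY_MIME, "video/mp4v-es");
        format.setByteBuffer("csd-0", ByteBuffer.wrap(esds)); // video ESDS carried as codec-specific data (assumption)
        return format;
    }

    // Step 426 (sketch): second video decoding parameters for the FFMPEG-based decoder.
    // "video/ffmpeg" and "codec-id" are the vendor-specific values quoted in the text.
    static MediaFormat secondVideoParams(MediaFormat format) {
        format.setString(MediaFormat.KEY_MIME, "video/ffmpeg");
        format.setInteger("codec-id", 0xd);
        return format;
    }
}
```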
Step 428, when the video file is not an MKV video file, creating other video parsers from the file format of the video file.
Step 430, parsing the video file according to the created other video parsers.
FIG. 5 is a flow diagram of a process for decoding a video file in one embodiment. As shown in fig. 5, the process of decoding a video file may specifically include:
step 502, calling a MediaCodec decoder to acquire an audio stream and a video stream in the video file according to the first parsing data and the second parsing data.
Step 504, after the audio stream in the video file is obtained, it is determined whether the audio decoding parameter corresponding to the read audio stream is the first audio decoding parameter.
Specifically, it is determined whether the MIMETYPE in the read audio decoding parameters is "audio/ffmpeg" and whether the encoding profile is "1"; if not, step 506 is executed; if so, step 510 is executed.
In step 506, the first audio decoding parameters are read, that is, the MIMETYPE corresponding to the read audio stream is "audio/mp4a-latm", and the audio ESDS field is read.
Step 508, calling the first target audio decoder according to the first audio decoding parameter, and performing audio decoding processing on the audio stream according to the first target audio decoder.
Specifically, when MIMETYPE in the first audio decoding parameter is "audio/mp4a-latm", the first target audio decoder native to the electronic device system may be called according to the first audio decoding parameter to perform audio decoding processing on the audio stream.
In step 510, the second audio decoding parameter is read, that is, the MIMETYPE corresponding to the read audio stream is "audio/ffmpeg", and the Codec ID is "0x15002".
Step 512, invoking a second target audio decoder according to the second audio decoding parameters, and performing audio decoding processing on the audio stream according to the second target audio decoder.
Specifically, when MIMETYPE in the second audio decoding parameter is "audio/FFMPEG" and the Codec ID is "0x15002", the audio decoder of the FFMPEG system may be called as the second target audio decoder according to the second audio decoding parameter, and the audio stream may be subjected to audio decoding processing according to the called second target audio decoder.
Step 514, judging, according to the predefined selection policy, whether to decode the video stream with the first target video decoder; if so, step 516 is executed; if not, step 520 is executed.
In step 516, the first video decoding parameters are read, that is, the MIMETYPE corresponding to the video stream is "video/mp4v-es", and the video ESDS field is read.
Step 518, invoking the first target video decoder according to the first video decoding parameter, and performing video decoding processing on the video stream according to the first target video decoder.
Specifically, when the MIMETYPE in the first video decoding parameters is "video/mp4v-es", the MPEG4 video decoder native to the electronic device system may be called as the first target video decoder according to the first video decoding parameters, and the video stream is decoded by the called first target video decoder.
In step 520, the second video decoding parameter is read, that is, MIMETYPE corresponding to the read video stream is "video/ffmpeg", and Codec ID is "0xd".
Step 522, invoking the second target video decoder according to the second video decoding parameter, and performing video decoding processing on the video stream according to the second target video decoder.
Specifically, when the MIMETYPE in the second video decoding parameters is read as "video/ffmpeg" and the Codec ID is "0xd", a video decoder of the FFMPEG system is called as the second target video decoder according to the second video decoding parameters, and the video stream is decoded by the called second target video decoder.
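For completeness, the sketch below shows how, once the target decoder has been created and started from the parameters read in the steps above, a demultiplexed track could be fed into it on Android; this is a generic MediaExtractor/MediaCodec loop and not code taken from the patent:

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import java.nio.ByteBuffer;

public class DecodeLoopSketch {
    // Hypothetical decode loop: push samples of one selected track into an
    // already configured and started decoder until the end of the stream.
    static void drainTrack(MediaExtractor extractor, MediaCodec decoder) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        boolean outputDone = false;
        while (!outputDone) {
            if (!inputDone) {
                int inIndex = decoder.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {
                    ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
                    int size = extractor.readSampleData(inBuf, 0);
                    if (size < 0) {
                        decoder.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        decoder.queueInputBuffer(inIndex, 0, size,
                                extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int outIndex = decoder.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {
                // Decoded audio samples or a decoded video frame are available here.
                decoder.releaseOutputBuffer(outIndex, /* render= */ true);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    outputDone = true;
                }
            }
        }
    }
}
```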
It should be understood that, although the steps in the flowchart of fig. 5 are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 5 may include a plurality of sub-steps or phases, which are not necessarily performed at the same time, but may be performed at different times, and the order of execution of the sub-steps or phases is not necessarily sequential, but may be performed in turn or alternately with at least some of the sub-steps or phases of other steps or steps.
Fig. 6 is a schematic structural diagram of a video decoding apparatus in one embodiment. As shown in fig. 6, the video decoding apparatus 600 includes a first parsing module 602, an error reporting module 604, a second parsing module 606, a splitting processing module 608, and a decoding processing module 610. Wherein:
the first parsing module 602 is configured to perform a first parsing process on the acquired video file through a first parser.
The error reporting module 604 is configured to acquire first parsing data and error data obtained during the first parsing process when an error in the first parsing process is detected.
The second parsing module 606 is configured to acquire a second parser according to the error data, and perform a second parsing process on the video file through the second parser to obtain second parsing data, where the first parsing data and the second parsing data are used to represent attributes of the video file.
And the splitting processing module 608 is configured to obtain an audio stream and a video stream in the video file according to the first parsing data and the second parsing data.
And the decoding processing module 610 is configured to decode the audio stream and the video stream by using a target decoder corresponding to the second parser, so as to obtain a decoded audio stream and a decoded video stream.
According to the video decoding apparatus provided in this embodiment, after the video file is acquired, a first parsing process is performed on the video file through the first parser. When an error is reported in the first parsing process, first parsing data and error data obtained during the first parsing process are acquired. A second parser is then acquired according to the error data, and the video file is processed through the second parser to obtain second parsing data. Finally, the audio stream and the video stream in the video file are acquired according to the first parsing data and the second parsing data, and the audio stream and the video stream are decoded through a target decoder corresponding to the second parser. In this way, when the first parsing process fails, a parser capable of correctly parsing the video file can be acquired to parse the video file, and the video file is decoded by the decoder corresponding to the acquired parser, which improves the accuracy of decoding the video file.
In one embodiment, the first parsing module 602 is further configured to obtain a video file, and obtain a file format of the video file; and when the file format is a preset file format, performing first analysis processing on the video file through a first analyzer corresponding to the preset file format.
In one embodiment, the second parser includes a second audio parser and a second video parser; the second parsing module 606 is further configured to obtain an audio format and a video format corresponding to the video file according to the error data; acquire the second audio parser according to the audio format and the second video parser according to the video format; and perform a second parsing process on the video file through the second audio parser and the second video parser to obtain second parsing data.
In one embodiment, the target decoder includes a target audio decoder corresponding to the second audio parser and a target video decoder corresponding to the second video parser; the decoding processing module 610 is further configured to perform audio decoding processing on the audio stream according to the target audio decoder corresponding to the second audio parser to obtain a decoded audio stream; and perform video decoding processing on the video stream according to the target video decoder corresponding to the second video parser to obtain a decoded video stream.
In one embodiment, the target audio decoder includes a first target audio decoder and a second target audio decoder; the decoding processing module 610 is further configured to perform audio decoding processing on the audio stream according to the first target audio decoder corresponding to the second audio parser when the audio format is a first audio format, to obtain a decoded audio stream; and perform audio decoding processing on the audio stream according to the second target audio decoder corresponding to the second audio parser when the audio format is a second audio format, to obtain a decoded audio stream.
In one embodiment, the decoding processing module 610 is further configured to acquire a first target video decoder corresponding to the second video parser according to a first configuration mode, and perform video decoding processing on the video stream according to the first target video decoder to obtain a decoded video stream; or acquire a second target video decoder corresponding to the second video parser according to a second configuration mode, and perform video decoding processing on the video stream according to the second target video decoder to obtain a decoded video stream.
The division of the modules in the video decoding device is only used for illustration, and in other embodiments, the video decoding device may be divided into different modules as needed to complete all or part of the functions of the video decoding device.
For specific limitations of the video decoding apparatus, reference may be made to the above limitations of the video decoding method, and no further description is given here. The various modules in the video decoding apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
The embodiment of the application also provides electronic equipment. As shown in fig. 7, for convenience of explanation, only the portions related to the embodiments of the present application are shown, and specific technical details are not disclosed, please refer to the method portions of the embodiments of the present application. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant ), a POS (Point of Sales), a vehicle-mounted computer, a wearable device, and the like, taking the electronic device as an example of the mobile phone:
fig. 7 is a block diagram of a part of a structure of a mobile phone related to an electronic device according to an embodiment of the present application. Referring to fig. 7, the mobile phone includes: radio Frequency (RF) circuitry 710, memory 720, input unit 730, display unit 740, sensor 750, audio circuitry 760, wireless fidelity (wireless fidelity, wiFi) module 770, processor 780, power supply 790, and the like. It will be appreciated by those skilled in the art that the handset construction shown in fig. 7 is not limiting of the handset and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The RF circuit 710 may be used to receive and transmit signals during the sending and receiving of information or during a call. In particular, it may receive downlink information from a base station and pass it to the processor 780 for processing, and it may transmit uplink data to the base station. Typically, RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuitry 710 may also communicate with networks and other devices via wireless communications. The wireless communications may use any communication standard or protocol including, but not limited to, global system for mobile communications (Global System of Mobile communication, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), long term evolution (Long Term Evolution, LTE), email, short message service (Short Messaging Service, SMS), and the like.
The memory 720 may be used to store software programs and modules, and the processor 780 performs various functional applications and data processing of the handset by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for at least one function (such as an application program of a sound playing function, an application program of an image playing function, etc.), and the like; the data storage area may store data (such as audio data, address book, etc.) created according to the use of the cellular phone, etc. In addition, memory 720 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 730 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset 700. In particular, the input unit 730 may include a touch panel 731 and other input devices 732. The touch panel 731, which may also be referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 731 or thereabout using any suitable object or accessory such as a finger, a stylus, etc.), and drive the corresponding connection device according to a predetermined program. In one embodiment, touch panel 731 may comprise two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 780, and can receive commands from the processor 780 and execute them. In addition, the touch panel 731 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 730 may include other input devices 732 in addition to the touch panel 731. In particular, the other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), and the like.
The display unit 740 may be used to display information input by a user or information provided to the user and various menus of the mobile phone. The display unit 740 may include a display panel 741. In one embodiment, the display panel 741 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like. In one embodiment, touch panel 731 may overlay display panel 741, and when touch panel 731 detects a touch operation thereon or thereabout, it is passed to processor 780 to determine the type of touch event, and processor 780 then provides a corresponding visual output on display panel 741 based on the type of touch event. Although in fig. 7, the touch panel 731 and the display panel 741 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the mobile phone.
The handset 700 may also include at least one sensor 750, such as a light sensor, a motion sensor, or other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 741 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 741 and/or the backlight when the mobile phone is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in all directions and, when the phone is stationary, the magnitude and direction of gravity; it can be used for applications that recognize the posture of the mobile phone (such as switching between landscape and portrait screens) and for vibration-recognition functions (such as a pedometer or tap detection). In addition, the mobile phone may be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 760, the speaker 761, and the microphone 762 may provide an audio interface between the user and the mobile phone. The audio circuit 760 may transmit an electrical signal converted from received audio data to the speaker 761, which converts it into a sound signal for output; on the other hand, the microphone 762 converts collected sound signals into electrical signals, which are received by the audio circuit 760 and converted into audio data. The audio data is output to the processor 780 for processing and then transmitted to another mobile phone via the RF circuit 710, or output to the memory 720 for further processing.
WiFi belongs to a short-distance wireless transmission technology, and a mobile phone can help a user to send and receive emails, browse webpages, access streaming media and the like through a WiFi module 770, so that wireless broadband Internet access is provided for the user. Although fig. 7 shows a WiFi module 770, it is to be understood that it is not a necessary component of the handset 700 and may be omitted as desired.
The processor 780 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions and processes of the mobile phone by running or executing software programs and/or modules stored in the memory 720 and calling data stored in the memory 720, thereby performing overall monitoring of the mobile phone. In one embodiment, the processor 780 may include one or more processing units. In one embodiment, the processor 780 may integrate an application processor and a modem processor, wherein the application processor primarily processes operating systems, user interfaces, application programs, and the like; the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 780.
The handset 700 further includes a power supply 790 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 780 through a power management system, so that charging, discharging, and power-consumption management are handled by the power management system.
In one embodiment, the handset 700 may also include a camera, bluetooth module, or the like.
In the embodiment of the present application, the steps of the video decoding method provided in the above embodiment are implemented when the processor 780 included in the electronic device executes a computer program stored on a memory.
Embodiments of the present application also provide a computer-readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the video decoding method provided by the above embodiments.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the video decoding method provided by the above embodiments.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments represent only a few implementations of the present application, and although they are described in considerable detail, they are not to be construed as limiting the scope of the present application. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, and these fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (14)

1. A method of video decoding, the method comprising:
performing first analysis processing on the acquired video file through a first analyzer;
when the first analysis processing error reporting is detected, acquiring first analysis data and error reporting data obtained in the first analysis processing process; the first analysis data is used for representing part of attributes of the video file; the error reporting data is used for recording the reason why the error reporting occurs when the first analysis processing is carried out on the video file by the first analyzer;
acquiring a second analyzer according to the error reporting data, and performing second analysis processing on the video file through the second analyzer to obtain second analysis data, wherein the first analysis data and the second analysis data are used for representing the attribute of the video file; the second analyzer is obtained by enhancing the first analyzer based on the error reporting data;
Acquiring an audio stream and a video stream in the video file according to the first analysis data and the second analysis data; and
decoding the audio stream and the video stream through a target decoder corresponding to the second analyzer to obtain a decoded audio stream and a decoded video stream.
2. The method of claim 1, wherein the performing first analysis processing on the acquired video file through the first analyzer comprises:
acquiring a video file and acquiring a file format of the video file;
and when the file format is a preset file format, performing first analysis processing on the video file through a first analyzer corresponding to the preset file format.
3. The method of claim 1, wherein the second analyzer comprises a second audio analyzer and a second video analyzer; and the acquiring a second analyzer according to the error reporting data, and performing second analysis processing on the video file through the second analyzer to obtain second analysis data comprises:
acquiring an audio format and a video format corresponding to the video file according to the error reporting data;
acquiring the second audio analyzer according to the audio format, and acquiring the second video analyzer according to the video format; and
performing second analysis processing on the video file through the second audio analyzer and the second video analyzer to obtain the second analysis data.
4. The method of claim 3, wherein the target decoder comprises a target audio decoder corresponding to the second audio analyzer and a target video decoder corresponding to the second video analyzer;
and the decoding the audio stream and the video stream through the target decoder corresponding to the second analyzer to obtain a decoded audio stream and a decoded video stream comprises:
performing audio decoding processing on the audio stream according to a target audio decoder corresponding to the second audio analyzer to obtain a decoded audio stream;
and performing video decoding processing on the video stream according to a target video decoder corresponding to the second video analyzer to obtain a decoded video stream.
5. The method of claim 4, wherein the target audio decoder comprises a first target audio decoder and a second target audio decoder;
and the performing audio decoding processing on the audio stream according to the target audio decoder corresponding to the second audio analyzer to obtain the decoded audio stream comprises:
when the audio format is a first audio format, performing audio decoding processing on the audio stream according to a first target audio decoder corresponding to the second audio analyzer to obtain a decoded audio stream;
and when the audio format is a second audio format, performing audio decoding processing on the audio stream according to a second target audio decoder corresponding to the second audio analyzer to obtain a decoded audio stream.
6. The method of claim 4, wherein the performing video decoding processing on the video stream according to the target video decoder corresponding to the second video analyzer to obtain a decoded video stream comprises:
acquiring a first target video decoder corresponding to the second video analyzer according to a first configuration mode, and performing video decoding processing on the video stream according to the first target video decoder to obtain a decoded video stream; or
acquiring a second target video decoder corresponding to the second video analyzer according to a second configuration mode, and performing video decoding processing on the video stream according to the second target video decoder to obtain a decoded video stream.
7. A video decoding device, the device comprising:
a first analysis module, used for performing first analysis processing on the acquired video file through a first analyzer;
an error reporting module, used for acquiring, when an error report of the first analysis processing is detected, first analysis data and error reporting data obtained during the first analysis processing, wherein the error reporting data is used for recording the reason why the error occurs when the first analyzer performs the first analysis processing on the video file;
a second analysis module, used for acquiring a second analyzer according to the error reporting data and performing second analysis processing on the video file through the second analyzer to obtain second analysis data, wherein the first analysis data and the second analysis data are used for representing the attributes of the video file, and the second analyzer is obtained by enhancing the first analyzer based on the error reporting data;
a distribution processing module, used for acquiring an audio stream and a video stream in the video file according to the first analysis data and the second analysis data; and
a decoding processing module, used for decoding the audio stream and the video stream through a target decoder corresponding to the second analyzer to obtain a decoded audio stream and a decoded video stream.
8. The device of claim 7, wherein the first analysis module is further used for acquiring the video file and acquiring a file format of the video file, and when the file format is a preset file format, performing first analysis processing on the video file through a first analyzer corresponding to the preset file format.
9. The device of claim 7, wherein the second analyzer comprises a second audio analyzer and a second video analyzer; and the second analysis module is further used for acquiring an audio format and a video format corresponding to the video file according to the error reporting data, acquiring the second audio analyzer according to the audio format, acquiring the second video analyzer according to the video format, and performing second analysis processing on the video file through the second audio analyzer and the second video analyzer to obtain the second analysis data.
10. The device of claim 9, wherein the target decoder comprises a target audio decoder corresponding to the second audio analyzer and a target video decoder corresponding to the second video analyzer; and the decoding processing module is further used for performing audio decoding processing on the audio stream according to the target audio decoder corresponding to the second audio analyzer to obtain a decoded audio stream, and performing video decoding processing on the video stream according to the target video decoder corresponding to the second video analyzer to obtain a decoded video stream.
11. The device of claim 10, wherein the target audio decoder comprises a first target audio decoder and a second target audio decoder; and the decoding processing module is further used for performing audio decoding processing on the audio stream according to the first target audio decoder corresponding to the second audio analyzer to obtain the decoded audio stream when the audio format is a first audio format, and performing audio decoding processing on the audio stream according to the second target audio decoder corresponding to the second audio analyzer to obtain the decoded audio stream when the audio format is a second audio format.
12. The device of claim 10, wherein the decoding processing module is further used for acquiring a first target video decoder corresponding to the second video analyzer according to a first configuration mode and performing video decoding processing on the video stream according to the first target video decoder to obtain a decoded video stream; or acquiring a second target video decoder corresponding to the second video analyzer according to a second configuration mode and performing video decoding processing on the video stream according to the second target video decoder to obtain a decoded video stream.
13. An electronic device comprising a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any one of claims 1 to 6.
14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201880098495.2A 2018-11-29 2018-11-29 Video decoding method, device, electronic equipment and computer readable storage medium Active CN112823529B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/118296 WO2020107353A1 (en) 2018-11-29 2018-11-29 Video decoding method, device, electronic equipment, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN112823529A CN112823529A (en) 2021-05-18
CN112823529B true CN112823529B (en) 2023-06-13

Family

ID=70851894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880098495.2A Active CN112823529B (en) 2018-11-29 2018-11-29 Video decoding method, device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112823529B (en)
WO (1) WO2020107353A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7024100B1 (en) * 1999-03-26 2006-04-04 Matsushita Electric Industrial Co., Ltd. Video storage and retrieval apparatus
CN101771869A (en) * 2008-12-30 2010-07-07 深圳市万兴软件有限公司 AV (audio/video) encoding and decoding device and method
CN103297761A (en) * 2013-06-03 2013-09-11 贝壳网际(北京)安全技术有限公司 Monitoring method and system for video analysis
CN103813177A (en) * 2012-11-07 2014-05-21 辉达公司 System and method for video decoding
CN104837052A (en) * 2014-06-10 2015-08-12 腾讯科技(北京)有限公司 Playing method of multimedia data and device
CN105992056A (en) * 2015-01-30 2016-10-05 腾讯科技(深圳)有限公司 Video decoding method and device
CN107172432A (en) * 2017-03-23 2017-09-15 杰发科技(合肥)有限公司 A kind of method for processing video frequency, device and terminal
CN108712654A (en) * 2018-05-18 2018-10-26 网宿科技股份有限公司 A kind of code-transferring method and equipment of audio/video flow

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6731657B1 (en) * 2000-03-14 2004-05-04 International Business Machines Corporation Multiformat transport stream demultiplexor
US7061930B2 (en) * 2000-10-10 2006-06-13 Matsushita Electric Industrial Co., Ltd. Data selection/storage apparatus and data processing apparatus using data selection/storage apparatus
TWI384459B (en) * 2009-07-22 2013-02-01 Mstar Semiconductor Inc Method of frame header auto detection
CN102904857A (en) * 2011-07-25 2013-01-30 风网科技(北京)有限公司 Client video playing system and method thereof
CN103686210B (en) * 2013-12-17 2017-01-25 广东威创视讯科技股份有限公司 Method and system for achieving audio and video transcoding in real time
CN105744372A (en) * 2014-12-11 2016-07-06 深圳都好看互动电视有限公司 Video-on-demand broadcast method and system, server, and on-demand client
CN104980788B (en) * 2015-02-11 2018-08-07 腾讯科技(深圳)有限公司 Video encoding/decoding method and device
CN107302715A (en) * 2017-08-10 2017-10-27 北京元心科技有限公司 Multimedia file playing method, multimedia file packaging method, corresponding device and terminal
CN107801095B (en) * 2017-09-25 2019-12-13 平安普惠企业管理有限公司 audio and video decoding method and terminal equipment
CN107666620A (en) * 2017-09-26 2018-02-06 上海爱优威软件开发有限公司 A kind of terminal system layer decoder method and system

Also Published As

Publication number Publication date
WO2020107353A1 (en) 2020-06-04
CN112823529A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN107995519B (en) Method, device and storage medium for playing multimedia file
CN108988909B (en) Audio processing method and device, electronic equipment and computer readable storage medium
CN107247691B (en) Text information display method and device, mobile terminal and storage medium
US11202066B2 (en) Video data encoding and decoding method, device, and system, and storage medium
US10824901B2 (en) Image processing of face sets utilizing an image recognition method
CN112596848B (en) Screen recording method, device, electronic equipment, storage medium and program product
CN103873883B (en) Video playing method and device and terminal equipment
CN112148579B (en) User interface testing method and device
CN103678605A (en) Information transmission method and device and terminal device
CN109995743B (en) Multimedia file processing method and terminal
CN107484201B (en) Flow statistical method and device, terminal and computer readable storage medium
US10136115B2 (en) Video shooting method and apparatus
CN112823519B (en) Video decoding method, device, electronic equipment and computer readable storage medium
CN112689872A (en) Audio detection method, computer-readable storage medium and electronic device
CN110223221B (en) Dynamic image playing method and terminal equipment
CN113873187B (en) Cross-terminal screen recording method, terminal equipment and storage medium
CN106230919B (en) File uploading method and device
CN112823529B (en) Video decoding method, device, electronic equipment and computer readable storage medium
CN109388487B (en) Application program processing method and device, electronic equipment and computer readable storage medium
CN116302807A (en) System, method, electronic device and storage medium for monitoring device memory
CN112997507A (en) Audio system control method, device, terminal and computer readable storage medium
CN111124721A (en) Webpage processing method and device and electronic equipment
CN107566870B (en) Multimedia processing method, remote controller and storage medium
US20120165966A1 (en) Method and apparatus for outputting audio data
CN106358070B (en) Multimedia file uploading method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant