CN112823529A - Video decoding method, video decoding device, electronic equipment and computer readable storage medium

Info

Publication number
CN112823529A
CN112823529A (application CN201880098495.2A)
Authority
CN
China
Prior art keywords
video, audio, stream, parser, file
Prior art date
Legal status
Granted
Application number
CN201880098495.2A
Other languages
Chinese (zh)
Other versions
CN112823529B (en)
Inventor
胡小朋
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd and Shenzhen Huantai Technology Co Ltd
Publication of CN112823529A
Application granted
Publication of CN112823529B
Legal status: Active (granted)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video decoding method includes: performing first parsing processing on an acquired video file by a first parser; when an error in the first parsing processing is detected, acquiring first parsing data and error data obtained during the first parsing processing; acquiring a second parser according to the error data, and performing second parsing processing on the video file by the second parser to obtain second parsing data, where the first parsing data and the second parsing data are used to represent attributes of the video file; acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and decoding the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.

Description

Video decoding method, video decoding device, electronic equipment and computer readable storage medium

Technical Field
The present application relates to the field of computer technologies, and in particular to a video decoding method, a video decoding device, an electronic device, and a computer-readable storage medium.
Background
An electronic device can display pictures, play music, play videos, and so on. Generally, different encoding methods are used when video files are generated, which produces video files in different formats. An encoded video file needs to be decoded before it can be played, so video files in different formats are decoded in different ways.
Disclosure of Invention
Embodiments of the present application provide a video decoding method, a video decoding device, an electronic device, and a computer-readable storage medium.
A video decoding method includes:
performing first parsing processing on an acquired video file by a first parser;
when an error in the first parsing processing is detected, acquiring first parsing data and error data obtained during the first parsing processing;
acquiring a second parser according to the error data, and performing second parsing processing on the video file by the second parser to obtain second parsing data, where the first parsing data and the second parsing data are used to represent attributes of the video file;
acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
decoding the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
A video decoding apparatus includes:
a first parsing module, configured to perform first parsing processing on an acquired video file by a first parser;
an error reporting module, configured to acquire first parsing data and error data obtained during the first parsing processing when an error in the first parsing processing is detected;
a second parsing module, configured to acquire a second parser according to the error data and perform second parsing processing on the video file by the second parser to obtain second parsing data, where the first parsing data and the second parsing data are used to represent attributes of the video file;
a stream splitting module, configured to acquire an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
a decoding processing module, configured to decode the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
An electronic device includes a memory and a processor. The memory stores a computer program that, when executed by the processor, causes the processor to perform the following steps:
performing first parsing processing on an acquired video file by a first parser;
when an error in the first parsing processing is detected, acquiring first parsing data and error data obtained during the first parsing processing;
acquiring a second parser according to the error data, and performing second parsing processing on the video file by the second parser to obtain second parsing data, where the first parsing data and the second parsing data are used to represent attributes of the video file;
acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
decoding the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
A computer-readable storage medium stores a computer program that, when executed by a processor, carries out the following steps:
performing first parsing processing on an acquired video file by a first parser;
when an error in the first parsing processing is detected, acquiring first parsing data and error data obtained during the first parsing processing;
acquiring a second parser according to the error data, and performing second parsing processing on the video file by the second parser to obtain second parsing data, where the first parsing data and the second parsing data are used to represent attributes of the video file;
acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
decoding the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow diagram of a video decoding method in one embodiment;
FIG. 2 is a flow chart of a video decoding method in another embodiment;
FIG. 3 is a flow chart of a video decoding method in yet another embodiment;
FIG. 4 is a flow diagram that illustrates the parsing of a video file in one embodiment;
FIG. 5 is a flow diagram of a process for decoding a video file in one embodiment;
FIG. 6 is a block diagram of an exemplary video decoding apparatus;
FIG. 7 is a block diagram of a partial structure of a mobile phone related to an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first", "second", and the like used herein may describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first parser may be referred to as a second parser, and similarly, a second parser may be referred to as a first parser, without departing from the scope of the present application. The first parser and the second parser are both parsers, but they are not the same parser.
FIG. 1 is a flow diagram of a video decoding method in one embodiment. As shown in FIG. 1, the video decoding method includes steps 102 to 110, wherein:
Step 102: perform first parsing processing on the acquired video file by a first parser.
In one embodiment, an electronic device may encode video files with various encoding modes, and video files encoded with different encoding modes have different file formats. For example, common video encoding formats include WMV (Windows Media Video), MPEG (Moving Picture Experts Group), and FFMPEG (Fast Forward Moving Picture Experts Group) formats.
Specifically, a video file contains an audio stream and a video stream. The main function of encoding is to compress the video pixel data and the audio data into a video code stream, thereby reducing the data volume of the video file. The electronic device may obtain a video file stored locally, or receive a video file sent by another electronic device. For example, a user may start a third-party application on the terminal and initiate an instruction to obtain a video file through the third-party application; the terminal then sends an acquisition request for the video file to a server according to the instruction, and the server returns the corresponding video file to the terminal after receiving the request.
After the electronic device obtains the video file, the first parser may perform first parsing processing on it. Parsing processing refers to the process of obtaining attribute information of the video file, for example the playing duration of the video stream, the size of the video frames, and the audio format of the audio stream. The electronic device may parse the video file with an integrated parser, or call an API (Application Programming Interface) of a third-party parsing platform to parse the video file, which is not limited herein.
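For illustration only, the following sketch shows what such a first parsing pass might look like on Android with the platform MediaExtractor: it collects the kind of attribute information mentioned above (picture size, duration, audio format). The class name and the attribute keys are illustrative assumptions and not part of the present application.

```java
import android.media.MediaExtractor;
import android.media.MediaFormat;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/** Minimal first-parse sketch: collect track attributes with the platform extractor. */
public final class FirstParseProbe {

    /** Returns attribute info (duration, width/height, audio MIME type) for the given file. */
    public static Map<String, Object> probe(String path) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        Map<String, Object> attributes = new HashMap<>();
        try {
            extractor.setDataSource(path);
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                MediaFormat format = extractor.getTrackFormat(i);
                String mime = format.getString(MediaFormat.KEY_MIME);
                if (mime == null) continue;
                if (mime.startsWith("video/")) {
                    attributes.put("video.mime", mime);
                    attributes.put("video.width", format.getInteger(MediaFormat.KEY_WIDTH));
                    attributes.put("video.height", format.getInteger(MediaFormat.KEY_HEIGHT));
                    if (format.containsKey(MediaFormat.KEY_DURATION)) {
                        attributes.put("video.durationUs", format.getLong(MediaFormat.KEY_DURATION));
                    }
                } else if (mime.startsWith("audio/")) {
                    attributes.put("audio.mime", mime);
                    attributes.put("audio.sampleRate", format.getInteger(MediaFormat.KEY_SAMPLE_RATE));
                    attributes.put("audio.channels", format.getInteger(MediaFormat.KEY_CHANNEL_COUNT));
                }
            }
        } finally {
            extractor.release();
        }
        return attributes;
    }
}
```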
Step 104: when an error in the first parsing processing is detected, acquire the first parsing data and the error data obtained during the first parsing processing.
Video files in different formats are parsed in different ways. When the first parser performs first parsing processing on the acquired video file but does not support parsing that file, an error is reported during the first parsing processing. While the first parser is parsing the video file, first parsing data representing part of the attributes of the video file can be obtained, and error data can also be obtained.
The error data can be used to determine the cause of the error in the first parsing processing. For example, the error data may indicate that "the first parser cannot parse a video stream whose Codec ID (codec identifier) is V_MS/VFW/FOURCC", from which it is known that the error is caused by the format of the video stream.
Step 106: acquire a second parser according to the error data, and perform second parsing processing on the video file by the second parser to obtain second parsing data, where the first parsing data and the second parsing data are used to represent attributes of the video file.
When the first parsing processing performed by the first parser reports an error, the cause of the error can be determined from the error data, and the second parser is therefore acquired according to the error data. Second parsing processing is then performed on the video file by the acquired second parser to obtain the second parsing data.
For example, if the first parser cannot parse the audio stream in the video file and the error data shows that this is caused by the audio format, the electronic device may acquire a second parser that can parse audio streams in that audio format and use it to parse the audio stream.
Step 108: acquire the audio stream and the video stream in the video file according to the first parsing data and the second parsing data.
In the embodiments provided by the present application, the relevant attributes of the video file may be obtained from the acquired first parsing data and second parsing data. The first parsing data and the second parsing data may represent different attributes of the video file or the same attribute, which is not limited herein. Specifically, after obtaining the first parsing data and the second parsing data, the electronic device may call the decoder corresponding to the video file to process it and obtain the audio stream and the video stream in the video file.
Step 110: decode the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
In one embodiment, after the audio stream and the video stream are obtained, they may be decoded by the target decoder corresponding to the second parser, so that the decoded audio stream and decoded video stream can be played on the electronic device. Specifically, when the electronic device acquires the second parser according to the error data, it may also acquire the parameters of the target decoder at the same time. When the video stream and the audio stream are decoded, the corresponding target decoder can then be called directly through the acquired parameters.
According to the video decoding method provided in the above embodiment, after the video file is acquired, first parsing processing is performed on the video file by the first parser. When the first parsing processing reports an error, the first parsing data and the error data obtained during the first parsing processing are acquired. A second parser is then acquired according to the error data, and second parsing processing is performed on the video file by the second parser to obtain second parsing data. Finally, the audio stream and the video stream in the video file are acquired according to the first parsing data and the second parsing data, and are decoded by the target decoder corresponding to the second parser. Therefore, when the first parsing processing fails, a parser that can correctly parse the video file is acquired to parse it, and the video file is decoded by the decoder corresponding to the acquired parser, which improves the accuracy of decoding the video file.
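The following is a minimal control-flow sketch of this fallback scheme. All of the types in it (Parser, ParseResult, ParseError, Decoder) are hypothetical placeholders introduced only to show the order of operations; they are not APIs defined by the present application or by any particular platform.

```java
/** Control-flow sketch of the fallback scheme; all interfaces here are illustrative placeholders. */
public final class FallbackPipeline {

    interface Parser { ParseResult parse(String path) throws ParseError; }
    interface ParseResult { }
    interface Decoder { void decode(ParseResult primary, ParseResult secondary, String path); }

    /** Thrown by the first parser; carries the partial parse data and the error data. */
    static final class ParseError extends Exception {
        final ParseResult partialData;   // first parsing data
        final String errorData;          // e.g. "cannot parse codec ID V_MS/VFW/FOURCC"
        ParseError(ParseResult partialData, String errorData) {
            this.partialData = partialData;
            this.errorData = errorData;
        }
    }

    private final Parser firstParser;
    private final java.util.function.Function<String, Parser> secondParserFor;   // chosen from the error data
    private final java.util.function.Function<Parser, Decoder> targetDecoderFor; // decoder matching a parser

    FallbackPipeline(Parser firstParser,
                     java.util.function.Function<String, Parser> secondParserFor,
                     java.util.function.Function<Parser, Decoder> targetDecoderFor) {
        this.firstParser = firstParser;
        this.secondParserFor = secondParserFor;
        this.targetDecoderFor = targetDecoderFor;
    }

    void run(String path) throws ParseError {
        try {
            ParseResult first = firstParser.parse(path);            // first parsing processing
            targetDecoderFor.apply(firstParser).decode(first, null, path);
        } catch (ParseError e) {
            Parser second = secondParserFor.apply(e.errorData);      // second parser from the error data
            ParseResult secondData = second.parse(path);             // second parsing processing
            // The audio and video streams are then obtained from both parse results and decoded
            // by the target decoder that corresponds to the second parser.
            targetDecoderFor.apply(second).decode(e.partialData, secondData, path);
        }
    }
}
```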
FIG. 2 is a flow chart of a video decoding method in another embodiment. As shown in FIG. 2, the video decoding method includes steps 202 to 218, wherein:
Step 202: acquire a video file and acquire the file format of the video file.
In one embodiment, the electronic device may read the video file from a locally stored folder, or receive a video file sent by another electronic device, which is not limited herein. For example, when the electronic device detects a play instruction, it obtains the file identifier corresponding to the play instruction and then obtains the video file corresponding to that file identifier. After acquiring the video file, the electronic device may acquire the file format of the video file.
Taking the Android system as an example, the MediaExtractor API may be called to read the bytes of the video file. After the bytes at the beginning of the video file are read, the MIMETYPE (Multipurpose Internet Mail Extensions type) of the video file can be obtained from them, and the file format of the video file can be determined from the MIMETYPE.
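As an illustration of how the leading bytes can be mapped to a MIMETYPE, the sketch below compares them against the EBML magic number 0x1A 0x45 0xDF 0xA3 used by Matroska (MKV/WebM) containers. This is one common sniffing approach and only an assumption about how the check could be implemented, not the exact logic of the present application.

```java
import java.io.FileInputStream;
import java.io.IOException;

/** Illustrative container sniffing from the first bytes of a file (not the patent's exact code). */
final class MimeSniffer {

    /** Matroska/WebM files start with the EBML header magic 0x1A 0x45 0xDF 0xA3. */
    private static final byte[] EBML_MAGIC = {0x1A, 0x45, (byte) 0xDF, (byte) 0xA3};

    static String sniff(String path) throws IOException {
        byte[] head = new byte[4];
        try (FileInputStream in = new FileInputStream(path)) {
            if (in.read(head) < head.length) {
                return "application/octet-stream"; // too short to identify
            }
        }
        for (int i = 0; i < EBML_MAGIC.length; i++) {
            if (head[i] != EBML_MAGIC[i]) {
                return "application/octet-stream"; // not an MKV container
            }
        }
        return "video/x-matroska"; // the preset file format handled by the MatroskaExtractor parser
    }
}
```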
Step 204: when the file format is a preset file format, perform first parsing processing on the video file by the first parser corresponding to the preset file format.
Specifically, the parser used to parse the video file corresponds to the file format of the video file. When the file format of the read video file is the preset file format, the electronic device may call the first parser corresponding to the preset file format to perform first parsing processing on the video file. When the file format of the read video file is not the preset file format, a parser corresponding to that other file format may be called to parse the video file.
For example, when the MIMETYPE read from a video file is "video/x-matroska", the video file is determined to be in the MKV (Matroska multimedia container) format. When the video file is determined to be in the MKV format, a MatroskaExtractor parser can be created to perform first parsing processing on it.
Step 206: when an error in the first parsing processing is detected, acquire the first parsing data and the error data obtained during the first parsing processing.
Step 208: acquire the audio format and the video format corresponding to the video file according to the error data.
The audio format corresponding to the video file is the format of the audio stream in the video file, and the video format is the format of the video stream in the video file. When the first parsing processing reports an error, the system of the electronic device does not support parsing video files in this file format. The audio format and the video format corresponding to the video file are then obtained from the error data.
Step 210: acquire a second audio parser according to the audio format, and acquire a second video parser according to the video format.
The second parser acquired by the electronic device includes a second audio parser and a second video parser: the corresponding second audio parser can be acquired according to the audio format, and the corresponding second video parser can be acquired according to the video format.
Step 212: perform second parsing processing on the video file by the second audio parser and the second video parser to obtain second parsing data.
After the second audio parser and the second video parser are acquired, second parsing processing can be performed on the video file by them to obtain the second parsing data. Specifically, the second audio parser and the second video parser may each be an added code block used to perform second parsing processing on the video file: the second audio parser parses the video file to obtain the attribute information corresponding to the audio stream, and the second video parser parses the video file to obtain the attribute information corresponding to the video stream. The second parsing data is obtained from the attribute information corresponding to the audio stream and the attribute information corresponding to the video stream.
Step 214: acquire the audio stream and the video stream in the video file according to the first parsing data and the second parsing data.
Step 216: perform audio decoding processing on the audio stream by the target audio decoder corresponding to the second audio parser to obtain a decoded audio stream.
After the audio stream and the video stream are obtained, they may be decoded separately by the target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream. The target decoder includes a target audio decoder and a target video decoder: audio decoding processing is performed on the audio stream by the target audio decoder to obtain the decoded audio stream, and video decoding processing is performed on the video stream by the target video decoder to obtain the decoded video stream.
Specifically, before the audio stream is decoded, the audio decoding parameters corresponding to the audio stream may be obtained. The corresponding target audio decoder is then called according to the acquired audio decoding parameters to perform audio decoding processing on the audio stream.
Step 218: perform video decoding processing on the video stream by the target video decoder corresponding to the second video parser to obtain a decoded video stream.
Before the video stream is decoded, the video decoding parameters corresponding to the video stream may be obtained. The corresponding target video decoder is then called according to the acquired video decoding parameters to perform video decoding processing on the video stream.
In one embodiment, the method for decoding the video stream may specifically include: acquiring a first target video decoder corresponding to the second video parser according to a first configuration mode, and performing video decoding processing on the video stream by the first target video decoder to obtain a decoded video stream; or acquiring a second target video decoder corresponding to the second video parser according to a second configuration mode, and performing video decoding processing on the video stream by the second target video decoder to obtain a decoded video stream.
For example, the first target video decoder may be an MPEG4 video decoder and the second target video decoder may be an FFMPEG video decoder. The video parameters used to configure the MPEG4 video decoder and the FFMPEG video decoder are different, and the interfaces called are different, so the ways of acquiring the MPEG4 video decoder and the FFMPEG video decoder are also different; a sketch of the two modes follows.
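In the sketch below, the "video/mp4v-es" MIMETYPE and the "csd-0" codec-specific-data key are standard Android MediaCodec/MediaFormat names; the "video/ffmpeg" MIMETYPE is the vendor value given later in this description and assumes a platform that registers such a decoder, and the "ffmpeg-codec-id" key is a hypothetical name, since the description only gives the Codec ID value itself.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;

import java.io.IOException;
import java.nio.ByteBuffer;

/** Sketch of the two decoder-acquisition modes; key names for the FFmpeg path are assumptions. */
final class VideoDecoderFactory {

    /** First configuration mode: the platform MPEG-4 decoder ("video/mp4v-es"). */
    static MediaCodec createMpeg4Decoder(int width, int height, byte[] esds) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/mp4v-es", width, height);
        // Codec-specific data (the video ESDS field) travels in the "csd-0" buffer.
        format.setByteBuffer("csd-0", ByteBuffer.wrap(esds));
        MediaCodec codec = MediaCodec.createDecoderByType("video/mp4v-es");
        codec.configure(format, null, null, 0);
        return codec;
    }

    /** Second configuration mode: an FFmpeg-backed decoder ("video/ffmpeg", Codec ID 0xd). */
    static MediaCodec createFfmpegDecoder(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/ffmpeg", width, height);
        format.setInteger("ffmpeg-codec-id", 0xd); // hypothetical key for the vendor codec ID
        MediaCodec codec = MediaCodec.createDecoderByType("video/ffmpeg");
        codec.configure(format, null, null, 0);
        return codec;
    }
}
```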
The decoded audio stream and the decoded video stream correspond to each other, and the electronic device can read and output them simultaneously.
As shown in FIG. 3, in the embodiment provided by the present application, the decoding process of the audio stream may further include:
Step 302: when the audio format is a first audio format, perform audio decoding processing on the audio stream by a first target audio decoder corresponding to the second audio parser to obtain a decoded audio stream.
When the audio stream is decoded, different audio decoders are acquired according to the audio format. When the audio format is the first audio format, the first target audio decoder is acquired to perform audio decoding processing on the audio stream and obtain the decoded audio stream.
Step 304: when the audio format is a second audio format, perform audio decoding processing on the audio stream by a second target audio decoder corresponding to the second audio parser to obtain a decoded audio stream.
When the audio format is the second audio format, the second target audio decoder is acquired to perform audio decoding processing on the audio stream and obtain the decoded audio stream. It can be understood that the first target audio decoder and the second target audio decoder are decoders that support decoding audio streams in the first audio format and the second audio format, respectively.
According to the video decoding method provided in this embodiment, when the file format of the acquired video file is the preset file format, first parsing processing is performed on the video file by the first parser. When the first parsing processing reports an error, the first parsing data and the error data obtained during the first parsing processing are acquired. A second parser is then configured according to the error data, and second parsing processing is performed on the video file by the second parser to obtain second parsing data. Finally, the audio stream and the video stream in the video file are acquired according to the first parsing data and the second parsing data, and are decoded by the target decoder corresponding to the second parser. Therefore, when the first parsing processing fails, a parser that can correctly parse the video file is configured to parse it, and the video file is decoded by the decoder corresponding to the configured parser, which improves the accuracy of decoding the video file.
FIG. 4 is a flow diagram that illustrates the parsing of a video file in one embodiment. As shown in FIG. 4, the process of parsing the video file may specifically include:
Step 402: after the electronic device obtains the video file, a MediaExtractor parser may be invoked.
Step 404: parse the video file with the invoked MediaExtractor parser to obtain the MIMETYPE of the video file.
Step 406: determine whether the MIMETYPE of the video file is "video/x-matroska". If the MIMETYPE of the read video file is "video/x-matroska", the file format of the video file is determined to be the preset file format, namely the MKV (Matroska multimedia container) format, and step 408 is executed; when the MIMETYPE of the video file is not "video/x-matroska", the file format of the video file is determined not to be the preset file format, and step 438 is executed.
Step 408: invoke a MatroskaExtractor parser (the first parser) to perform first parsing processing on the video file in the MKV format.
Step 410: detect whether the first parsing processing reports an error. When an error is reported in the first parsing processing, it is determined that the system does not support parsing the video file in the MKV format, the first parsing data and the error data are acquired, and step 412 is executed; when the first parsing processing does not report an error, it is determined that the system supports parsing the video file in the MKV format, and step 438 is executed.
Step 412: enhance the MatroskaExtractor parser by adding the second parser, and perform second parsing processing with the enhanced MatroskaExtractor parser to obtain the second parsing data.
Specifically, after parsing, the decoder parameters may be configured through steps 414 to 434. The electronic device can then call the corresponding decoder according to the configured decoder parameters during decoding. It can be understood that when the decoder parameters are configured, the configured parameters may be written into the video file to establish a correspondence with the audio stream and the video stream. During decoding, after the electronic device reads the audio stream and the video stream, it can read the configured decoder parameters corresponding to them and then call the corresponding decoder according to the read decoder parameters.
Step 414: acquire the audio format of the audio stream in the video file according to the error data. If the Codec ID of the audio stream is "A_AAC" and the encoding profile is not "1", the audio stream is determined to be in the first audio format, and step 416 is executed; when the Codec ID of the audio stream is "A_AAC" and the encoding profile is "1", the audio stream is determined to be in the second audio format, and step 418 is executed.
Step 416: configure the first audio decoding parameters.
Specifically, the first target audio decoder may be an audio decoder of the system platform. Configuring the first audio decoding parameters may specifically include: configuring the MIMETYPE of the first target audio decoder as "audio/mp4a-latm" and adding the audio ESDS field for the AAC audio decoder.
It can be understood that when the audio format of the audio stream is the first audio format, the first audio decoding parameters are configured in step 416, and the first target audio decoder can then be called according to the configured first audio decoding parameters to perform audio decoding processing.
Step 418: configure the second audio decoding parameters.
In one embodiment, the second target audio decoder may be an audio decoder of the FFMPEG system, and configuring the second audio decoding parameters may specifically include: configuring the MIMETYPE of the second target audio decoder as "audio/ffmpeg" and the Codec ID as "0x15002".
Specifically, when the audio format of the audio stream is the second audio format, the second audio decoding parameters are configured in step 418, and the second target audio decoder can then be called according to the configured second audio decoding parameters to perform audio decoding processing.
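The two audio parameter sets can be expressed as MediaFormat configurations, sketched below. The "csd-0" buffer is the standard Android carrier for codec-specific data (here the ESDS payload); the "ffmpeg-codec-id" key name is an assumption, since the description only specifies the Codec ID value "0x15002".

```java
import android.media.MediaFormat;

import java.nio.ByteBuffer;

/** Sketch of the two audio decoding parameter sets described above. */
final class AudioDecodingParams {

    /** First audio decoding parameters: platform AAC decoder; esds is the codec-specific data from parsing. */
    static MediaFormat firstAudioParams(int sampleRate, int channels, byte[] esds) {
        MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", sampleRate, channels);
        format.setByteBuffer("csd-0", ByteBuffer.wrap(esds)); // audio ESDS field
        return format;
    }

    /** Second audio decoding parameters: FFmpeg-backed decoder (vendor MIME type from the description). */
    static MediaFormat secondAudioParams(int sampleRate, int channels) {
        MediaFormat format = MediaFormat.createAudioFormat("audio/ffmpeg", sampleRate, channels);
        format.setInteger("ffmpeg-codec-id", 0x15002); // Codec ID from the description; key name assumed
        return format;
    }
}
```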
Step 420: acquire, according to the error data, that the video format of the video stream in the video file is V_MS/VFW/FOURCC.
Step 422: determine, according to a predefined selection strategy, whether the first target video decoder is used to decode the video stream; if so, execute step 424; if not, execute step 426.
The predefined selection strategy may be a selection strategy input by the user, or a strategy selected automatically by the electronic device, which is not limited herein. For example, the electronic device may use the first target video decoder by default, and switch to the second target video decoder for video decoding processing when the remaining battery level of the electronic device is lower than a power threshold.
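One possible form of such a predefined selection strategy, sketched under the assumption that the battery-level rule above is used, is shown below; the 20% threshold and the helper name are illustrative and not specified in the present application.

```java
import android.content.Context;
import android.os.BatteryManager;

/** Sketch of one predefined selection strategy: prefer the platform decoder unless the battery is low. */
final class DecoderSelectionPolicy {

    private static final int LOW_BATTERY_PERCENT = 20; // assumed threshold, not given in the description

    /** Returns true when the first (platform) target video decoder should be used. */
    static boolean useFirstTargetVideoDecoder(Context context) {
        BatteryManager bm = context.getSystemService(BatteryManager.class);
        if (bm == null) {
            return true; // no battery information: keep the default (platform) decoder
        }
        int capacity = bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY);
        return capacity >= LOW_BATTERY_PERCENT;
    }
}
```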
Step 424: configure the first video decoding parameters.
The first target video decoder may be an MPEG4 video decoder native to the electronic device system, and configuring the first video decoding parameters may specifically include: configuring the MIMETYPE of the first target video decoder as "video/mp4v-es" and adding the video ESDS field.
Step 426: configure the second video decoding parameters.
The second target video decoder may be an FFMPEG video decoder, and configuring the second video decoding parameters may specifically include: configuring the MIMETYPE of the second target video decoder as "video/ffmpeg" and the Codec ID as "0xd".
Step 428: when the video file is not an MKV video file, create another video parser according to the file format of the video file.
Step 430: parse the video file with the created other video parser.
FIG. 5 is a flow diagram that illustrates the decoding of a video file in one embodiment. As shown in FIG. 5, the process of decoding the video file may specifically include:
Step 502: call a MediaCodec decoder to obtain the audio stream and the video stream in the video file according to the first parsing data and the second parsing data.
Step 504: after the audio stream in the video file is acquired, determine whether the audio decoding parameters corresponding to the read audio stream are the first audio decoding parameters.
Specifically, it is determined whether the MIMETYPE in the read audio decoding parameters is "audio/ffmpeg" and whether the encoding profile is "1"; if not, step 506 is executed; if so, step 510 is executed.
Step 506: read the first audio decoding parameters, that is, read the MIMETYPE corresponding to the audio stream as "audio/mp4a-latm" together with the audio ESDS field.
Step 508: call the first target audio decoder according to the first audio decoding parameters, and perform audio decoding processing on the audio stream with the first target audio decoder.
Specifically, when the MIMETYPE in the first audio decoding parameters is "audio/mp4a-latm", the first target audio decoder native to the electronic device system may be called to perform audio decoding processing on the audio stream according to the first audio decoding parameters.
Step 510: read the second audio decoding parameters, that is, read the MIMETYPE corresponding to the audio stream as "audio/ffmpeg" and the Codec ID as "0x15002".
Step 512: call the second target audio decoder according to the second audio decoding parameters, and perform audio decoding processing on the audio stream with the second target audio decoder.
Specifically, when the MIMETYPE in the second audio decoding parameters is "audio/ffmpeg" and the Codec ID is "0x15002", an audio decoder of the FFMPEG system may be called as the second target audio decoder according to the second audio decoding parameters, and audio decoding processing is performed on the audio stream with the called second target audio decoder.
Step 514: determine, according to the predefined selection strategy, whether the first target video decoder is used to decode the video stream; if so, execute step 516; if not, execute step 520.
Step 516: read the first video decoding parameters, that is, read the MIMETYPE corresponding to the video stream as "video/mp4v-es" together with the video ESDS field.
Step 518: call the first target video decoder according to the first video decoding parameters, and perform video decoding processing on the video stream with the first target video decoder.
Specifically, when the MIMETYPE in the first video decoding parameters is "video/mp4v-es", a video decoder of the MPEG4 system native to the electronic device may be called as the first target video decoder according to the first video decoding parameters, and video decoding processing is performed on the video stream with the called first target video decoder.
Step 520: read the second video decoding parameters, that is, read the MIMETYPE corresponding to the video stream as "video/ffmpeg" and the Codec ID as "0xd".
Step 522: call the second target video decoder according to the second video decoding parameters, and perform video decoding processing on the video stream with the second target video decoder.
Specifically, the MIMETYPE in the second video decoding parameters is read as "video/ffmpeg" and the Codec ID as "0xd"; a video decoder of the FFMPEG system is called as the second target video decoder according to the second video decoding parameters, and video decoding processing is performed on the video stream with the called second target video decoder.
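Whichever of the four decoder paths above is taken, the actual decoding on Android then follows the usual MediaCodec feed/drain pattern. The sketch below assumes the corresponding track has already been selected on a MediaExtractor and that API level 21 or higher is available; it is a generic illustration rather than the exact implementation of the present application.

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;

import java.nio.ByteBuffer;

/** Generic feed/drain loop shared by the four decoder paths above (track already selected). */
final class DecodeLoop {

    static void run(MediaExtractor extractor, MediaCodec codec) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        boolean outputDone = false;
        while (!outputDone) {
            if (!inputDone) {
                int inIndex = codec.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {
                    ByteBuffer buf = codec.getInputBuffer(inIndex);
                    int size = extractor.readSampleData(buf, 0);
                    if (size < 0) {
                        // No more samples: signal end of stream to the decoder.
                        codec.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int outIndex = codec.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {
                // Decoded audio samples / video frames are available here for rendering or output.
                codec.releaseOutputBuffer(outIndex, /* render= */ false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    outputDone = true;
                }
            }
        }
    }
}
```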
It should be understood that although the steps in the flow charts of FIGS. 1-5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not performed in a strict order and may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
FIG. 6 is a block diagram of a video decoding apparatus in one embodiment. As shown in FIG. 6, the video decoding apparatus 600 includes a first parsing module 602, an error reporting module 604, a second parsing module 606, a stream splitting module 608, and a decoding processing module 610, wherein:
The first parsing module 602 is configured to perform first parsing processing on the acquired video file by a first parser.
The error reporting module 604 is configured to acquire the first parsing data and the error data obtained during the first parsing processing when an error in the first parsing processing is detected.
The second parsing module 606 is configured to acquire a second parser according to the error data, and perform second parsing processing on the video file by the second parser to obtain second parsing data, where the first parsing data and the second parsing data are used to represent attributes of the video file.
The stream splitting module 608 is configured to acquire the audio stream and the video stream in the video file according to the first parsing data and the second parsing data.
The decoding processing module 610 is configured to decode the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
According to the video decoding apparatus provided in the above embodiment, after the video file is acquired, first parsing processing is performed on the video file by the first parser. When the first parsing processing reports an error, the first parsing data and the error data obtained during the first parsing processing are acquired. A second parser is then acquired according to the error data, and second parsing processing is performed on the video file by the second parser to obtain second parsing data. Finally, the audio stream and the video stream in the video file are acquired according to the first parsing data and the second parsing data, and are decoded by the target decoder corresponding to the second parser. Therefore, when the first parsing processing fails, a parser that can correctly parse the video file is acquired to parse it, and the video file is decoded by the decoder corresponding to the acquired parser, which improves the accuracy of decoding the video file.
In one embodiment, the first parsing module 602 is further configured to acquire a video file and acquire the file format of the video file, and to perform first parsing processing on the video file by a first parser corresponding to a preset file format when the file format is the preset file format.
In one embodiment, the second parser includes a second audio parser and a second video parser. The second parsing module 606 is further configured to acquire the audio format and the video format corresponding to the video file according to the error data, acquire the second audio parser according to the audio format and the second video parser according to the video format, and perform second parsing processing on the video file by the second audio parser and the second video parser to obtain the second parsing data.
In one embodiment, the target decoder includes a target audio decoder corresponding to the second audio parser and a target video decoder corresponding to the second video parser. The decoding processing module 610 is further configured to perform audio decoding processing on the audio stream according to the target audio decoder corresponding to the second audio parser to obtain a decoded audio stream, and to perform video decoding processing on the video stream according to the target video decoder corresponding to the second video parser to obtain a decoded video stream.
In one embodiment, the target audio decoder includes a first target audio decoder and a second target audio decoder. The decoding processing module 610 is further configured to: when the audio format is the first audio format, perform audio decoding processing on the audio stream according to the first target audio decoder corresponding to the second audio parser to obtain a decoded audio stream; and when the audio format is the second audio format, perform audio decoding processing on the audio stream according to the second target audio decoder corresponding to the second audio parser to obtain a decoded audio stream.
In one embodiment, the decoding processing module 610 is further configured to acquire a first target video decoder corresponding to the second video parser according to a first configuration mode and perform video decoding processing on the video stream according to the first target video decoder to obtain a decoded video stream, or to acquire a second target video decoder corresponding to the second video parser according to a second configuration mode and perform video decoding processing on the video stream according to the second target video decoder to obtain a decoded video stream.
The division of the modules in the above video decoding apparatus is only for illustration; in other embodiments, the video decoding apparatus may be divided into different modules as needed to complete all or part of its functions.
For specific limitations on the video decoding apparatus, reference may be made to the limitations on the video decoding method above, which are not repeated here. Each module in the above video decoding apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
An embodiment of the present application also provides an electronic device. As shown in FIG. 7, for convenience of explanation, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present application. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. The following takes a mobile phone as an example of the electronic device:
fig. 7 is a block diagram of a partial structure of a mobile phone related to an electronic device provided in an embodiment of the present application. Referring to fig. 7, the handset includes: radio Frequency (RF) circuit 710, memory 720, input unit 730, display unit 740, sensor 750, audio circuit 760, wireless fidelity (WiFi) module 770, processor 780, and power supply 790. Those skilled in the art will appreciate that the handset configuration shown in fig. 7 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The RF circuit 710 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it may receive downlink information from a base station and deliver it to the processor 780 for processing, and may also transmit uplink data to the base station. Typically, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 720 may be used to store software programs and modules, and the processor 780 may execute various functional applications and data processing of the cellular phone by operating the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as an application program for a sound playing function, an application program for an image playing function, and the like), and the like; the data storage area may store data (such as audio data, an address book, etc.) created according to the use of the mobile phone, and the like. Further, the memory 720 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 730 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 700. Specifically, the input unit 730 may include a touch panel 731 and other input devices 732. The touch panel 731, which may also be referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 731 by using a finger, a stylus, or any other suitable object or accessory) thereon or nearby, and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 731 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 780, and can receive and execute commands from the processor 780. In addition, the touch panel 731 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 730 may include other input devices 732 in addition to the touch panel 731. In particular, other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), and the like.
The display unit 740 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 740 may include a display panel 741. In one embodiment, the Display panel 741 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. In one embodiment, the touch panel 731 can cover the display panel 741, and when the touch panel 731 detects a touch operation on or near the touch panel 731, the touch operation is transmitted to the processor 780 to determine the type of the touch event, and then the processor 780 provides a corresponding visual output on the display panel 741 according to the type of the touch event. Although the touch panel 731 and the display panel 741 are two independent components in fig. 7 to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone 700 may also include at least one sensor 750, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor adjusts the brightness of the display panel 741 according to the brightness of the ambient light, and the proximity sensor turns off the display panel 741 and/or the backlight when the mobile phone is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in each direction and can detect the magnitude and direction of gravity when the phone is stationary; it can be used for applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait screens), vibration-recognition related functions (such as a pedometer and tapping), and the like. The mobile phone may also be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
Audio circuitry 760, speaker 761, and microphone 762 may provide an audio interface between a user and a cell phone. The audio circuit 760 can transmit the electrical signal converted from the received audio data to the speaker 761, and the electrical signal is converted into a sound signal by the speaker 761 and output; on the other hand, the microphone 762 converts the collected sound signal into an electric signal, converts the electric signal into audio data after being received by the audio circuit 760, and then outputs the audio data to the processor 780 for processing, and then the processed audio data may be transmitted to another mobile phone through the RF circuit 710, or outputs the audio data to the memory 720 for subsequent processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 770, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although FIG. 7 shows the WiFi module 770, it can be understood that it is not an essential component of the mobile phone 700 and may be omitted as needed.
The processor 780 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 720 and calling data stored in the memory 720, thereby integrally monitoring the mobile phone. In one embodiment, processor 780 may include one or more processing units. In one embodiment, processor 780 may integrate an application processor and a modem processor, where the application processor primarily handles operating systems, user interfaces, applications, and the like; the modem processor handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 780.
The mobile phone 700 also includes a power supply 790 (such as a battery) for supplying power to the various components. Preferably, the power supply may be logically coupled to the processor 780 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
In one embodiment, the cell phone 700 may also include a camera, a bluetooth module, and the like.
In the embodiment of the present application, the processor 780 included in the electronic device implements the steps of the video decoding method provided in the above embodiment when executing the computer program stored in the memory.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the video decoding methods provided by the above-described embodiments.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the video decoding method provided by the above embodiments.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that for those of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

  1. A video decoding method, comprising:
    performing first parsing processing on an acquired video file by a first parser;
    when an error in the first parsing processing is detected, acquiring first parsing data and error data obtained during the first parsing processing;
    acquiring a second parser according to the error data, and performing second parsing processing on the video file by the second parser to obtain second parsing data, wherein the first parsing data and the second parsing data are used to represent attributes of the video file;
    acquiring an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
    decoding the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
  2. The method according to claim 1, wherein the performing first parsing processing on the acquired video file by a first parser comprises:
    acquiring a video file and acquiring a file format of the video file; and
    when the file format is a preset file format, performing first parsing processing on the video file by a first parser corresponding to the preset file format.
  3. The method of claim 1, wherein the second parser comprises a second audio parser and a second video parser, and the acquiring a second parser according to the error report data and performing second parsing processing on the video file by the second parser to obtain second parsing data comprises:
    acquiring an audio format and a video format corresponding to the video file according to the error report data;
    acquiring the second audio parser according to the audio format, and acquiring the second video parser according to the video format; and
    performing the second parsing processing on the video file by the second audio parser and the second video parser to obtain the second parsing data.
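    As an illustration only, a minimal sketch of claim 3, again reusing the hypothetical types from the sketch after claim 1; the parser tables and the example format keys are assumptions introduced here.

    fun secondParsers(
        report: ErrorReport,
        audioParsers: Map<String, Parser>,   // keyed by audio format, e.g. "aac" (assumed)
        videoParsers: Map<String, Parser>    // keyed by video format, e.g. "hevc" (assumed)
    ): Pair<Parser, Parser>? {
        // The audio format and video format carried in the error report data select the second parsers.
        val secondAudioParser = audioParsers[report.audioFormat] ?: return null
        val secondVideoParser = videoParsers[report.videoFormat] ?: return null
        return secondAudioParser to secondVideoParser
    }

    fun secondParse(file: ByteArray, secondAudioParser: Parser, secondVideoParser: Parser): Map<String, Any> =
        // Both second parsers run over the video file; their merged results form the second parsing data.
        secondAudioParser.parse(file).attributes + secondVideoParser.parse(file).attributes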
  4. The method of claim 3, wherein the target decoder comprises a target audio decoder corresponding to the second audio parser and a target video decoder corresponding to the second video parser;
    the decoding the audio stream and the video stream by the target decoder corresponding to the second parser to obtain the decoded audio stream and the decoded video stream comprises:
    performing audio decoding processing on the audio stream by the target audio decoder corresponding to the second audio parser to obtain the decoded audio stream; and
    performing video decoding processing on the video stream by the target video decoder corresponding to the second video parser to obtain the decoded video stream.
  5. The method of claim 4, wherein the target audio decoder comprises a first target audio decoder and a second target audio decoder;
    the performing audio decoding processing on the audio stream by the target audio decoder corresponding to the second audio parser to obtain the decoded audio stream comprises:
    when the audio format is a first audio format, performing the audio decoding processing on the audio stream by the first target audio decoder corresponding to the second audio parser to obtain the decoded audio stream; and
    when the audio format is a second audio format, performing the audio decoding processing on the audio stream by the second target audio decoder corresponding to the second audio parser to obtain the decoded audio stream.
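    As an illustration only, a minimal sketch of the audio decoder selection in claim 5, reusing the hypothetical Decoder type; the concrete format string is an assumption, since claim 5 does not name the first and second audio formats.

    fun targetAudioDecoder(
        audioFormat: String,
        firstTargetAudioDecoder: Decoder,    // used for the first audio format, assumed here to be "aac"
        secondTargetAudioDecoder: Decoder    // used for the second audio format (any other value in this sketch)
    ): Decoder = when (audioFormat) {
        "aac" -> firstTargetAudioDecoder
        else  -> secondTargetAudioDecoder
    }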
  6. The method according to claim 4, wherein the performing video decoding processing on the video stream by the target video decoder corresponding to the second video parser to obtain the decoded video stream comprises:
    acquiring a first target video decoder corresponding to the second video parser according to a first configuration mode, and performing the video decoding processing on the video stream by the first target video decoder to obtain the decoded video stream; or
    acquiring a second target video decoder corresponding to the second video parser according to a second configuration mode, and performing the video decoding processing on the video stream by the second target video decoder to obtain the decoded video stream.
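    As an illustration only, a minimal sketch of the configuration-dependent video decoder selection in claim 6. Claim 6 does not define the two configuration modes; reading them as hardware-accelerated versus software decoding is purely an assumption of this sketch, as are all names below.

    enum class VideoDecodeConfiguration { FIRST, SECOND }

    fun targetVideoDecoder(
        configuration: VideoDecodeConfiguration,
        firstTargetVideoDecoder: Decoder,    // e.g. a hardware-accelerated decoder (assumption only)
        secondTargetVideoDecoder: Decoder    // e.g. a software decoder (assumption only)
    ): Decoder = when (configuration) {
        VideoDecodeConfiguration.FIRST  -> firstTargetVideoDecoder
        VideoDecodeConfiguration.SECOND -> secondTargetVideoDecoder
    }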
  7. A video decoding apparatus, comprising:
    a first parsing module, configured to perform first parsing processing on an acquired video file by a first parser;
    an error reporting module, configured to acquire, when an error in the first parsing processing is detected, first parsing data and error report data obtained during the first parsing processing;
    a second parsing module, configured to acquire a second parser according to the error report data and perform second parsing processing on the video file by the second parser to obtain second parsing data, wherein the first parsing data and the second parsing data are used for representing attributes of the video file;
    a stream separation module, configured to acquire an audio stream and a video stream in the video file according to the first parsing data and the second parsing data; and
    a decoding processing module, configured to decode the audio stream and the video stream by a target decoder corresponding to the second parser to obtain a decoded audio stream and a decoded video stream.
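    As an illustration only, a minimal sketch of how the modules of claim 7 could be composed, reusing the hypothetical ParseResult and ErrorReport types from the sketch after claim 1; each module is reduced to a function value so that only the structure, not any particular implementation, is shown.

    class VideoDecodingApparatus(
        private val firstParsingModule: (ByteArray) -> ParseResult,
        private val errorReportingModule: (ParseResult) -> ErrorReport?,
        private val secondParsingModule: (ByteArray, ErrorReport) -> Map<String, Any>,
        private val streamSeparationModule: (ByteArray, Map<String, Any>) -> Pair<ByteArray, ByteArray>,
        private val decodingModule: (Pair<ByteArray, ByteArray>) -> Pair<ByteArray, ByteArray>
    ) {
        fun decode(file: ByteArray): Pair<ByteArray, ByteArray>? {
            val first = firstParsingModule(file)                     // first parsing processing
            val report = errorReportingModule(first) ?: return null  // error report data, if any
            val second = secondParsingModule(file, report)           // second parsing processing
            val streams = streamSeparationModule(file, first.attributes + second)
            return decodingModule(streams)                           // decoded audio and video streams
        }
    }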
  8. The apparatus of claim 7, wherein the first parsing module is further configured to acquire the video file and acquire a file format of the video file, and, when the file format is a preset file format, perform the first parsing processing on the video file by a first parser corresponding to the preset file format.
  9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 6.
  10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201880098495.2A 2018-11-29 2018-11-29 Video decoding method, device, electronic equipment and computer readable storage medium Active CN112823529B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/118296 WO2020107353A1 (en) 2018-11-29 2018-11-29 Video decoding method, device, electronic equipment, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN112823529A true CN112823529A (en) 2021-05-18
CN112823529B CN112823529B (en) 2023-06-13

Family

ID=70851894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880098495.2A Active CN112823529B (en) 2018-11-29 2018-11-29 Video decoding method, device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112823529B (en)
WO (1) WO2020107353A1 (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6731657B1 (en) * 2000-03-14 2004-05-04 International Business Machines Corporation Multiformat transport stream demultiplexor
US7024100B1 (en) * 1999-03-26 2006-04-04 Matsushita Electric Industrial Co., Ltd. Video storage and retrieval apparatus
CN101771869A (en) * 2008-12-30 2010-07-07 深圳市万兴软件有限公司 AV (audio/video) encoding and decoding device and method
US20110022399A1 (en) * 2009-07-22 2011-01-27 Mstar Semiconductor, Inc. Auto Detection Method for Frame Header
CN103297761A (en) * 2013-06-03 2013-09-11 贝壳网际(北京)安全技术有限公司 Monitoring method and system for video analysis
CN103686210A (en) * 2013-12-17 2014-03-26 广东威创视讯科技股份有限公司 Method and system for achieving audio and video transcoding in real time
CN103813177A (en) * 2012-11-07 2014-05-21 辉达公司 System and method for video decoding
CN104837052A (en) * 2014-06-10 2015-08-12 腾讯科技(北京)有限公司 Playing method of multimedia data and device
CN104980788A (en) * 2015-02-11 2015-10-14 腾讯科技(深圳)有限公司 Video decoding method and device
CN105744372A (en) * 2014-12-11 2016-07-06 深圳都好看互动电视有限公司 Video-on-demand broadcast method and system, server, and on-demand client
CN105992056A (en) * 2015-01-30 2016-10-05 腾讯科技(深圳)有限公司 Video decoding method and device
CN107172432A (en) * 2017-03-23 2017-09-15 杰发科技(合肥)有限公司 A kind of method for processing video frequency, device and terminal
CN107302715A (en) * 2017-08-10 2017-10-27 北京元心科技有限公司 Multimedia file playing method, multimedia file packaging method, corresponding device and terminal
CN107666620A (en) * 2017-09-26 2018-02-06 上海爱优威软件开发有限公司 A kind of terminal system layer decoder method and system
CN108712654A (en) * 2018-05-18 2018-10-26 网宿科技股份有限公司 A kind of code-transferring method and equipment of audio/video flow

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7061930B2 (en) * 2000-10-10 2006-06-13 Matsushita Electric Industrial Co., Ltd. Data selection/storage apparatus and data processing apparatus using data selection/storage apparatus
CN102904857A (en) * 2011-07-25 2013-01-30 风网科技(北京)有限公司 Client video playing system and method thereof
CN107801095B (en) * 2017-09-25 2019-12-13 平安普惠企业管理有限公司 audio and video decoding method and terminal equipment

Also Published As

Publication number Publication date
WO2020107353A1 (en) 2020-06-04
CN112823529B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
CN108320744B (en) Voice processing method and device, electronic equipment and computer readable storage medium
CN107995519B (en) Method, device and storage medium for playing multimedia file
CN107040609B (en) Network request processing method and device
WO2015154670A1 (en) Method and apparatus for invoking application programming interface
EP3493113B1 (en) Image processing method, computer device, and computer readable storage medium
CN106371964B (en) Method and device for prompting message
CN112596848B (en) Screen recording method, device, electronic equipment, storage medium and program product
CN109284144B (en) Fast application processing method and mobile terminal
CN107329778B (en) System updating method and related product
WO2018103441A1 (en) Network positioning method and terminal device
CN106203228A (en) Two-dimensional code information transmission method, device and equipment
WO2018161540A1 (en) Fingerprint registration method and related product
CN104809055B (en) Application program testing method and device based on cloud platform
CN112689872B (en) Audio detection method, computer-readable storage medium and electronic device
CN109062648B (en) Information processing method and device, mobile terminal and storage medium
CN112823519B (en) Video decoding method, device, electronic equipment and computer readable storage medium
US9826568B2 (en) Method, system and computer-readable storage medium for reducing data transmission delay
US20160330423A1 (en) Video shooting method and apparatus
CN106230919B (en) File uploading method and device
WO2018103440A1 (en) Network positioning method and terminal device
CN109388487B (en) Application program processing method and device, electronic equipment and computer readable storage medium
CN111177612B (en) Page login authentication method and related device
CN109511139B (en) WIFI control method and device, mobile device and computer-readable storage medium
CN109041212B (en) Positioning method and wearable device
CN112997507A (en) Audio system control method, device, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant