CN114095778B - Audio hard decoding method of application-level player and display device - Google Patents
- Publication number
- CN114095778B (application CN202010862300.6A)
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/434—Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
- H04N21/4341—Demultiplexing of audio and video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4398—Processing of audio elementary streams involving reformatting operations of audio signals
Abstract
The application discloses an audio hard decoding method and a display device for an application-level player. The application-level player is configured to create an audio hard decoder based on audio hard decoding parameters; call the audio hard decoder to perform audio hard decoding processing on the pre-decoding audio data to obtain decoded audio data in a pbuffer structure storage form; convert the decoded audio data in the pbuffer structure storage form into decoded audio data in an avframe structure storage form; and write the decoded audio data into an audio decoded data queue so as to play the specified audio/video file. In this way, the method and the display device provided by the application enable the application-level player to decode audio data in a hard decoding mode, allow sound effects to be preprocessed, and allow the audio data to be decoded into multiple channels, thereby enhancing the sound effect.
Description
Technical Field
The present application relates to the field of audio decoding technologies, and in particular, to an audio hard decoding method and a display device for an application level player.
Background
With the rapid development of display devices, their functions are increasingly rich and their performance increasingly powerful. Current display devices include smart televisions, dual-screen laser televisions, smart set-top boxes, smart display screens, and the like.
In existing display devices, an application-level player is generally adopted to play an audio/video file (film source): the application-level player decodes the audio data and video data of the film source and then plays them. Application-level players, including ijkplayer, exoplayer, adoplayer, PPLIVETVPLAYER, etc., typically use ffmpeg (a third-party open-source demuxing and decoding library), an Android-native audio extractor, or a self-encapsulated decoder when decoding audio.
However, most of the film sources played by an application-level player are network film sources with a single audio coding format; the channel output decoded in this manner is generally 2-channel, and the sampling rate and bit rate are low, so a good audio-visual effect cannot be obtained when playing the audio/video file.
Disclosure of Invention
The application provides an audio hard decoding method and a display device for an application-level player, which are used for solving the problem that the existing decoding modes cannot obtain a good audio-visual effect.
In a first aspect, the present application provides a display apparatus comprising:
a controller, the controller being configured with an application-level player for playing a specified audio/video file, the application-level player being configured to:
acquiring audio hard decoding parameters and pre-decoding audio data, wherein the pre-decoding audio data refers to audio data obtained by demultiplexing (unpacking) the specified audio/video file, and the audio hard decoding parameters refer to the parameters required for performing audio hard decoding processing on the pre-decoding audio data;
creating an audio hard decoder based on the audio hard decoding parameters;
Calling the audio hard decoder to perform audio hard decoding processing on the audio data before decoding to obtain decoded audio data;
converting the decoded audio data into decoded audio data in an avframe structure storage form, wherein the avframe structure storage form is the storage form adopted by the application-level player;
and writing the decoded audio data in the avframe structure storage form into an audio decoded data queue so as to play the specified audio/video file.
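For illustration only, the decoder-creation step above can be sketched in C against the Android NDK AMediaCodec API. This is one plausible realization, not the patented implementation; HardDecodeParams is a hypothetical stand-in for the audio hard decoding parameters.

```c
/*
 * Hypothetical sketch: creating and starting an audio hard decoder from
 * the audio hard decoding parameters, using the Android NDK AMediaCodec
 * API. HardDecodeParams is an illustrative stand-in.
 */
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>

typedef struct HardDecodeParams {
    const char *mime;          /* e.g. "audio/mp4a-latm" for AAC */
    int32_t sample_rate;
    int32_t channel_count;
} HardDecodeParams;

static AMediaCodec *create_audio_hard_decoder(const HardDecodeParams *p)
{
    AMediaCodec *codec = AMediaCodec_createDecoderByType(p->mime);
    if (codec == NULL)
        return NULL;

    AMediaFormat *fmt = AMediaFormat_new();
    AMediaFormat_setString(fmt, AMEDIAFORMAT_KEY_MIME, p->mime);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_SAMPLE_RATE, p->sample_rate);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_CHANNEL_COUNT, p->channel_count);

    /* No surface or crypto for audio; flags = 0 selects decoder mode. */
    if (AMediaCodec_configure(codec, fmt, NULL, NULL, 0) != AMEDIA_OK ||
        AMediaCodec_start(codec) != AMEDIA_OK) {
        AMediaFormat_delete(fmt);
        AMediaCodec_delete(codec);
        return NULL;
    }
    AMediaFormat_delete(fmt);
    return codec;
}
```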
In some embodiments of the application, the application level player, when executing the creating an audio hard decoder based on the audio hard decoding parameters, is further configured to:
creating an audio hard decoder;
Configuring the audio hard decoder based on the audio hard decoding parameters;
in an audio hard decoder configured with the audio hard decoding parameters, a hard decoding input thread and a hard decoding output thread are created.
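Continuing the illustration, the input and output threads of this embodiment might be spawned as below (a pthread-based sketch; the loop bodies are sketched after the next embodiment):

```c
/* Sketch: after the decoder is configured, spawn the hard decoding input
 * and output threads. Illustrative only. */
#include <media/NdkMediaCodec.h>
#include <pthread.h>

void *hard_decode_input_loop(void *codec);   /* feeds pre-decoding data */
void *hard_decode_output_loop(void *codec);  /* drains decoded data     */

static int start_hard_decode_threads(AMediaCodec *codec,
                                     pthread_t *in_t, pthread_t *out_t)
{
    if (pthread_create(in_t, NULL, hard_decode_input_loop, codec) != 0)
        return -1;
    return pthread_create(out_t, NULL, hard_decode_output_loop, codec) == 0
               ? 0 : -1;
}
```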
In some embodiments of the present application, the application level player, when executing the calling the audio hard decoder to perform audio hard decoding processing on the audio data before decoding to obtain decoded audio data, is further configured to:
Invoking a hard decoding input thread in the audio hard decoder to perform audio hard decoding processing on the audio data before decoding to obtain decoded audio data;
and writing the decoded audio data into a hard decoding output thread in the audio hard decoder to obtain the decoded audio data in a pbuffer structure storage form, wherein the pbuffer structure storage form refers to the storage form adopted by the audio hard decoder.
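A hypothetical sketch of the two thread bodies follows, assuming the NDK AMediaCodec API; Packet, next_pre_decode_packet() and deliver_output_buffer() are illustrative stand-ins for the player's pre-decode queue and for handing over the pbuffer-form output.

```c
#include <media/NdkMediaCodec.h>
#include <stdint.h>
#include <string.h>

typedef struct Packet { uint8_t *data; size_t size; int64_t pts_us; } Packet;
int  next_pre_decode_packet(Packet *pkt);            /* demuxed audio data */
void deliver_output_buffer(const uint8_t *pbuf,
                           const AMediaCodecBufferInfo *info);

void *hard_decode_input_loop(void *arg)
{
    AMediaCodec *codec = arg;
    Packet pkt;
    while (next_pre_decode_packet(&pkt) == 0) {
        ssize_t idx;
        do {                                   /* wait for an input index */
            idx = AMediaCodec_dequeueInputBuffer(codec, 10000 /* us */);
        } while (idx < 0);
        size_t cap = 0;
        uint8_t *in = AMediaCodec_getInputBuffer(codec, (size_t)idx, &cap);
        size_t n = pkt.size < cap ? pkt.size : cap;
        memcpy(in, pkt.data, n);               /* write pre-decoding data */
        AMediaCodec_queueInputBuffer(codec, (size_t)idx, 0, n, pkt.pts_us, 0);
    }
    return NULL;
}

void *hard_decode_output_loop(void *arg)
{
    AMediaCodec *codec = arg;
    AMediaCodecBufferInfo info;
    for (;;) {
        ssize_t idx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000);
        if (idx < 0)          /* try-again / format-change codes land here */
            continue;
        size_t cap = 0;
        uint8_t *pbuf = AMediaCodec_getOutputBuffer(codec, (size_t)idx, &cap);
        deliver_output_buffer(pbuf, &info);    /* pbuffer-form PCM + info */
        AMediaCodec_releaseOutputBuffer(codec, (size_t)idx, false);
        if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM)
            break;
    }
    return NULL;
}
```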
In some embodiments of the application, the application level player, when executing the creating an audio hard decoder, is further configured to:
Acquiring operation environment parameters of the audio hard decoder, wherein the operation environment parameters are parameters required when the audio hard decoder is called;
compiling the running environment parameters to generate a cross-language calling file comprising function names;
Obtaining a function name corresponding to the audio hard decoding parameter, and matching the function name corresponding to the audio hard decoding parameter with the function name in the cross-language calling file;
when the function names match, an audio hard decoder is created.
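One plausible, hypothetical reading of this embodiment is a JNI-style lookup: the native player resolves the Java-side decoder entry point by name and creates the hard decoder only when the name lookup (the "function name match") succeeds. The class and method names in this sketch are invented for illustration.

```c
#include <jni.h>

static jobject try_create_java_decoder(JNIEnv *env, const char *mime)
{
    /* hypothetical Java wrapper class around android.media.MediaCodec */
    jclass cls = (*env)->FindClass(env, "tv/player/AudioHardDecoder");
    if (cls == NULL) {                   /* name mismatch: no hard decode */
        (*env)->ExceptionClear(env);
        return NULL;
    }
    jmethodID create = (*env)->GetStaticMethodID(
        env, cls, "create",
        "(Ljava/lang/String;)Ltv/player/AudioHardDecoder;");
    if (create == NULL) {
        (*env)->ExceptionClear(env);
        return NULL;
    }
    jstring jmime = (*env)->NewStringUTF(env, mime);
    jobject decoder = (*env)->CallStaticObjectMethod(env, cls, create, jmime);
    (*env)->DeleteLocalRef(env, jmime);
    return decoder;                      /* NULL if creation failed */
}
```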
In some embodiments of the application, the application level player, when executing the invoking the hard decoding input thread in the audio hard decoder, is further configured to:
invoking a hard decoding input thread in the audio hard decoder to acquire the audio data before decoding and an input buffer index;
And writing the audio data before decoding into the input buffer index for audio hard decoding processing to obtain the audio data after decoding.
In some embodiments of the application, the application level player, before executing the acquiring of the input buffer index, is further configured to:
judging, based on a user operation while the specified audio/video file is played, whether the audio data needs to be emptied;
if the audio data needs to be emptied, emptying the audio data stored in the audio hard decoder;
if the audio data does not need to be emptied, performing the step of acquiring the input buffer index.
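As a sketch, the emptying check could sit at the top of the input loop, driven by a flag set on the user-operation (e.g. seek) path; AMediaCodec_flush drops the audio data queued inside the hard decoder. The flag name is a hypothetical placeholder.

```c
#include <media/NdkMediaCodec.h>
#include <stdatomic.h>

extern atomic_bool g_need_flush;    /* set by the seek/user-operation path */

static void maybe_flush(AMediaCodec *codec)
{
    if (atomic_exchange(&g_need_flush, false))
        AMediaCodec_flush(codec);   /* empty queued input/output buffers */
    /* otherwise fall through to AMediaCodec_dequeueInputBuffer() */
}
```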
In some embodiments of the present application, the application level player, when executing the writing of the decoded audio data into the hard decoding output thread in the audio hard decoder to obtain decoded audio data in the pbuffer structure storage form, is further configured to:
Invoking a hard decoding output thread in the audio hard decoder, and acquiring an output buffer index from the audio hard decoder;
and writing the decoded audio data into the output buffer index to obtain the decoded audio data in the pbuffer structure storage form.
In some embodiments of the application, the application level player, when executing the converting of the decoded audio data into decoded audio data in the avframe structure storage form, is further configured to:
acquiring audio data output format information and the decoded audio data in the pbuffer structure storage form, wherein the audio data output format information refers to the information required for output in the avframe structure storage form;
acquiring an audio data offset from the decoded audio data in the pbuffer structure storage form;
obtaining, based on the audio data offset and the decoded audio data, the real decoded audio data in the pbuffer structure storage form;
creating an avframe structure based on the real decoded audio data in the pbuffer structure storage form;
and writing the audio data output format information into the avframe structure to obtain decoded audio data in the avframe structure storage form.
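A hedged sketch of this conversion follows, assuming interleaved 16-bit PCM output and the pre-5.1 FFmpeg channel-layout API; OutFormat is an illustrative stand-in for the audio data output format information, and the offset in AMediaCodecBufferInfo is applied first to reach the real decoded audio data.

```c
#include <libavutil/channel_layout.h>
#include <libavutil/frame.h>
#include <media/NdkMediaCodec.h>
#include <string.h>

typedef struct OutFormat { int sample_rate, channels; } OutFormat;

static AVFrame *pbuffer_to_avframe(const uint8_t *pbuf,
                                   const AMediaCodecBufferInfo *info,
                                   const OutFormat *ofmt)
{
    const uint8_t *pcm = pbuf + info->offset;      /* real decoded data */
    int nb_samples = info->size / (2 * ofmt->channels); /* S16 assumption */

    AVFrame *frame = av_frame_alloc();
    if (frame == NULL)
        return NULL;
    frame->format         = AV_SAMPLE_FMT_S16;     /* output format info */
    frame->sample_rate    = ofmt->sample_rate;
    frame->nb_samples     = nb_samples;
    frame->channels       = ofmt->channels;        /* pre-5.1 channel API */
    frame->channel_layout = av_get_default_channel_layout(ofmt->channels);

    if (av_frame_get_buffer(frame, 0) < 0) {       /* allocate data[] */
        av_frame_free(&frame);
        return NULL;
    }
    memcpy(frame->data[0], pcm, (size_t)info->size);
    frame->pts = info->presentationTimeUs;         /* carry timing over */
    return frame;
}
```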
In some embodiments of the application, the application level player is further configured to:
calling a standard decoding interface to obtain a comparison table of decoding formats and underlying decoder names in one-to-one correspondence, wherein each underlying decoder name corresponds to an application chip;
determining a first underlying decoder name based on the comparison table and the decoding format corresponding to the audio hard decoding parameters;
obtaining a second underlying decoder name from a configured static file, and scoring the first underlying decoder name and the second underlying decoder name;
and determining the underlying decoder name with the highest score as the target underlying decoder name, and establishing a connection with the application chip corresponding to the target underlying decoder name.
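An illustrative sketch of this selection logic follows; the OMX component names, static-file lookup, and scoring rule are hypothetical placeholders, not actual chip vendors' names.

```c
#include <stddef.h>
#include <string.h>

typedef struct { const char *format; const char *omx_name; } FormatEntry;

static const FormatEntry k_table[] = {      /* per-chip comparison table */
    { "audio/mp4a-latm", "OMX.vendorA.audio.decoder.aac" },
    { "audio/ac3",       "OMX.vendorA.audio.decoder.ac3" },
};

const char *static_file_decoder_name(const char *format);  /* 2nd name  */
int         score_decoder_name(const char *omx_name);      /* e.g. rank */

static const char *pick_target_decoder(const char *format)
{
    const char *first = NULL;
    const char *second = static_file_decoder_name(format);
    for (size_t i = 0; i < sizeof k_table / sizeof k_table[0]; i++)
        if (strcmp(k_table[i].format, format) == 0)
            first = k_table[i].omx_name;

    if (first == NULL)  return second;
    if (second == NULL) return first;
    return score_decoder_name(first) >= score_decoder_name(second)
               ? first : second;            /* highest score wins */
}
```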
In a second aspect, the present application also provides an audio hard decoding method of an application level player, the method comprising:
acquiring audio hard decoding parameters and pre-decoding audio data, wherein the pre-decoding audio data refers to audio data obtained by demultiplexing (unpacking) a specified audio/video file, and the audio hard decoding parameters refer to the parameters required for performing audio hard decoding processing on the pre-decoding audio data;
creating an audio hard decoder based on the audio hard decoding parameters;
Calling the audio hard decoder to perform audio hard decoding processing on the audio data before decoding to obtain decoded audio data;
converting the decoded audio data into decoded audio data in an avframe structure storage form, wherein the avframe structure storage form is the storage form adopted by the application-level player;
and writing the decoded audio data in the avframe structure storage form into an audio decoded data queue so as to play the specified audio/video file.
In a third aspect, the present application further provides a storage medium storing a program which, when executed, can implement some or all of the steps of the embodiments of the audio hard decoding method of an application-level player provided by the present application.
As can be seen from the above technical solutions, in the audio hard decoding method and display device for an application-level player provided by the embodiments of the present invention, the configured application-level player obtains audio hard decoding parameters and pre-decoding audio data, and creates an audio hard decoder based on the audio hard decoding parameters. It calls the audio hard decoder to perform audio hard decoding processing on the pre-decoding audio data to obtain decoded audio data in a pbuffer structure storage form, converts that data into decoded audio data in an avframe structure storage form, and writes the decoded audio data into an audio decoded data queue so as to play the specified audio/video file. In this way, the method and display device provided by the embodiments of the present invention enable the application-level player to decode audio data in a hard decoding mode, allow sound effects to be preprocessed, and allow the audio data to be decoded into multiple channels, thereby enhancing the sound effect.
Drawings
In order to more clearly illustrate the technical solution of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
A schematic diagram of an operational scenario between a display device and a control apparatus according to some embodiments is schematically shown in fig. 1;
a hardware configuration block diagram of a display device 200 according to some embodiments is exemplarily shown in fig. 2;
a hardware configuration block diagram of the control device 100 according to some embodiments is exemplarily shown in fig. 3;
a schematic diagram of the software configuration in a display device 200 according to some embodiments is exemplarily shown in fig. 4;
An icon control interface display schematic of an application in a display device 200 according to some embodiments is illustrated in fig. 5;
A flowchart of an audio hard decoding method of an application level player according to some embodiments is shown schematically in fig. 6;
A block diagram of an application level player according to some embodiments is shown schematically in fig. 7;
a method flow diagram for creating an audio hard decoder according to some embodiments is illustrated in fig. 8;
A data flow diagram of a hard decode input thread according to some embodiments is illustrated in fig. 9;
A flowchart of a method of execution of a hard decode output thread according to some embodiments is illustrated in fig. 10;
A data flow diagram of a hard decode output thread according to some embodiments is illustrated in fig. 11;
a method flow diagram for converting to an avframe structure according to some embodiments is shown schematically in fig. 12;
a data flow diagram of the conversion to an avframe structure according to some embodiments is shown schematically in fig. 13;
a flowchart of a method for multi-chip OMX layer decoding format compatibility according to some embodiments is illustrated in fig. 14.
Detailed Description
For the purposes of making the objects, embodiments and advantages of the present application more apparent, exemplary embodiments of the present application will be described more fully hereinafter with reference to the accompanying drawings in which those exemplary embodiments are shown. It should be understood that the exemplary embodiments described are merely some, but not all, of the embodiments of the application.
Based on the exemplary embodiments described herein, all other embodiments that may be obtained by one of ordinary skill in the art without making any inventive effort are within the scope of the appended claims. Furthermore, while the present disclosure has been described in terms of an exemplary embodiment or embodiments, it should be understood that each aspect of the disclosure can be practiced separately from the other aspects.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first", "second", "third" and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this disclosure refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
The term "remote control" as used herein refers to a component of an electronic device (such as a display device as disclosed herein) that can be controlled wirelessly, typically over a relatively short distance. Typically, the electronic device is connected to the electronic device using infrared and/or Radio Frequency (RF) signals and/or bluetooth, and may also include functional modules such as WiFi, wireless USB, bluetooth, motion sensors, etc. For example: the hand-held touch remote controller replaces most of the physical built-in hard keys in a general remote control device with a touch screen user interface.
The term "gesture" as used herein refers to a user behavior by which a user expresses an intended idea, action, purpose, and/or result through a change in hand shape or movement of a hand, etc.
A schematic diagram of an operational scenario between a display device and a control apparatus according to some embodiments is schematically shown in fig. 1. As shown in fig. 1, a user may operate the display apparatus 200 through the mobile terminal 300 and the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller, and the communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes; the display device 200 is controlled wirelessly or by other wired modes. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc. For example, the user can input corresponding control instructions through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, and power key on the remote controller to control the functions of the display device 200.
In some embodiments, mobile terminals, tablet computers, notebook computers, and other smart devices may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device. The application program, by configuration, can provide various controls to the user in an intuitive User Interface (UI) on a screen associated with the smart device.
In some embodiments, the mobile terminal 300 may install a software application with the display device 200, implement connection communication through a network communication protocol, and achieve the purpose of one-to-one control operation and data communication. Such as: it is possible to implement a control command protocol established between the mobile terminal 300 and the display device 200, synchronize a remote control keyboard to the mobile terminal 300, and implement a function of controlling the display device 200 by controlling a user interface on the mobile terminal 300. The audio/video content displayed on the mobile terminal 300 can also be transmitted to the display device 200, so as to realize the synchronous display function.
As also shown in fig. 1, the display device 200 is also in data communication with the server 400 via a variety of communication means. The display device 200 may be permitted to make communication connections via a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. By way of example, display device 200 receives software program updates, or accesses a remotely stored digital media library by sending and receiving information, as well as Electronic Program Guide (EPG) interactions. The server 400 may be a cluster, or may be multiple clusters, and may include one or more types of servers. Other web service content such as video on demand and advertising services are provided through the server 400.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device. The particular display device type, size, resolution, etc. are not limited, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired.
The display apparatus 200 may additionally provide a smart network television function of a computer support function, including, but not limited to, a network television, a smart television, an Internet Protocol Television (IPTV), etc., in addition to the broadcast receiving television function.
A hardware configuration block diagram of a display device 200 according to some embodiments is illustrated in fig. 2.
In some embodiments, at least one of the controller 250, the modem 210, the communicator 220, the detector 230, the input/output interface 255, the display 275, the audio output interface 285, the memory 260, the power supply 290, the user interface 265, and the external device interface 240 is included in the display apparatus 200.
In some embodiments, the display 275 is configured to receive image signals from the first processor output, and to display video content and images and components of the menu manipulation interface.
In some embodiments, display 275 includes a display screen assembly for presenting pictures, and a drive assembly for driving the display of images.
In some embodiments, the displayed video content may come from broadcast television content, or from various broadcast signals that may be received via a wired or wireless communication protocol, or may be various image content received from a network server via a network communication protocol.
In some embodiments, the display 275 is used to present a user-manipulated UI interface generated in the display device 200 and used to control the display device 200.
In some embodiments, depending on the type of display 275, a drive assembly for driving the display is also included.
In some embodiments, display 275 is a projection display and may further include a projection device and a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types. For example: the communicator 220 may include at least one of a Wifi module 221, a bluetooth module 222, a wired ethernet module 223, or other network communication protocol module or a near field communication protocol module, and an infrared receiver.
In some embodiments, the display device 200 may establish control signal and data signal transmission and reception between the communicator 220 and the external control device 100 or the content providing device.
In some embodiments, the user interface 265 may be used to receive infrared control signals from the control device 100 (e.g., an infrared remote control, etc.).
In some embodiments, the detector 230 is a component that the display device 200 uses to capture signals from, or interact with, the external environment.
In some embodiments, the detector 230 includes an optical receiver, i.e., a sensor for capturing the intensity of ambient light, so that display parameters can be adaptively changed according to the captured ambient light.
In some embodiments, the detector 230 may further include an image collector 232, such as a camera, a video camera, etc., which may be used to collect external environmental scenes, collect attributes of a user or interact with a user, adaptively change display parameters, and recognize a user gesture to implement a function of interaction with the user.
In some embodiments, the detector 230 may also include a temperature sensor or the like, such as by sensing ambient temperature.
In some embodiments, the display device 200 may adaptively adjust the display color temperature of the image. For example, the display device 200 may be adjusted to display the image with a colder color temperature when the ambient temperature is high, or with a warmer color temperature when the ambient temperature is low.
In some embodiments, the detector 230 also includes a sound collector 231, such as a microphone, that may be used to receive the user's sound. Illustratively, it receives a voice signal containing a control instruction by which the user controls the display device 200, or collects environmental sound used to recognize the type of environmental scene, so that the display device 200 can adapt to environmental noise.
In some embodiments, as shown in fig. 2, the input/output interface 255 is configured to enable data transfer between the controller 250 and external other devices or other controllers 250. Such as receiving video signal data and audio signal data of an external device, command instruction data, or the like.
In some embodiments, external device interface 240 may include, but is not limited to, the following: any one or more interfaces of a high definition multimedia interface HDMI interface, an analog or data high definition component input interface, a composite video input interface, a USB input interface, an RGB port, and the like can be used. The plurality of interfaces may form a composite input/output interface.
In some embodiments, as shown in fig. 2, the modem 210 is configured to receive the broadcast television signal by a wired or wireless receiving manner, and may perform modulation and demodulation processes such as amplification, mixing, and resonance, and demodulate the audio/video signal from a plurality of wireless or wired broadcast television signals, where the audio/video signal may include a television audio/video signal carried in a television channel frequency selected by a user, and an EPG data signal.
In some embodiments, the frequency point demodulated by the modem 210 is controlled by the controller 250, and the controller 250 may send a control signal according to the user selection, so that the modem responds to the television signal frequency selected by the user and modulates and demodulates the television signal carried by the frequency.
In some embodiments, the broadcast television signal may be classified into a terrestrial broadcast signal, a cable broadcast signal, a satellite broadcast signal, an internet broadcast signal, or the like according to a broadcasting system of the television signal. Or may be differentiated into digital modulation signals, analog modulation signals, etc., depending on the type of modulation. Or it may be classified into digital signals, analog signals, etc. according to the kind of signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like. In this way, the set-top box outputs the television audio and video signals modulated and demodulated by the received broadcast television signals to the main body equipment, and the main body equipment receives the audio and video signals through the first input/output interface.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command to select to display a UI object on the display 275, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, such as: displaying an operation of connecting to a hyperlink page, a document, an image, or the like, or executing an operation of a program corresponding to the icon. The user command for selecting the UI object may be an input command through various input means (e.g., mouse, keyboard, touch pad, etc.) connected to the display device 200 or a voice command corresponding to a voice uttered by the user.
As shown in fig. 2, the controller 250 includes at least one of a random access memory 251 (RAM), a read-only memory 252 (ROM), a video processor 270, an audio processor 280, other processors 253 (e.g., a graphics processing unit, GPU), a central processing unit 254 (CPU), a communication interface, and a communication bus 256 that connects the components.
In some embodiments, RAM 251 is used to store temporary data for the operating system or other running programs. In some embodiments, ROM 252 is used to store various system boot instructions.
In some embodiments, ROM 252 stores a basic input output system (BIOS), which comprises a driver program and a boot operating system, used for completing the power-on self-test of the system, the initialization of each functional module in the system, and the basic input/output of the system.
In some embodiments, upon receipt of a power-on signal, the display device 200 power begins to boot, and the processor 254 executes system boot instructions in the ROM 252 to copy temporary data of the operating system stored in memory into the RAM 251 to facilitate booting or running the operating system. When the operating system is started, the processor 254 copies temporary data of various applications in memory to the RAM 251, and then facilitates the starting or running of the various applications.
In some embodiments, processor 254 is used to execute operating system and application program instructions stored in memory. And executing various application programs, data and contents according to various interactive instructions received from the outside, so as to finally display and play various audio and video contents.
In some example embodiments, the processor 254 may include a plurality of processors. The plurality of processors may include one main processor and one or more sub-processors. The main processor performs some operations of the display apparatus 200 in the pre-power-up mode and/or displays a picture in the normal mode. The one or more sub-processors perform operations in a standby mode or the like.
In some embodiments, the graphics processor 253 is configured to generate various graphical objects, such as: icons, operation menus, user input instruction display graphics, and the like. The device comprises an arithmetic unit, wherein the arithmetic unit is used for receiving various interaction instructions input by a user to carry out operation and displaying various objects according to display attributes. And a renderer for rendering the various objects obtained by the arithmetic unit, wherein the rendered objects are used for being displayed on a display.
In some embodiments, the video processor 270 is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image composition according to the standard codec protocol of the input signal, to obtain a signal that can be directly displayed or played on the display device 200.
In some embodiments, video processor 270 includes a demultiplexing module, a video decoding module, an image compositing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module demultiplexes the input audio/video data stream (for example, an input MPEG-2 stream) into video signals, audio signals, and the like.
And the video decoding module is used for processing the demultiplexed video signals, including decoding, scaling and the like.
The image synthesis module, such as an image synthesizer, superimposes and mixes the GUI signal that is input by the user or generated by the graphics generator with the scaled video image, to generate an image signal for display.
The frame rate conversion module is configured to convert the input video frame rate, for example converting a 60 Hz frame rate into a 120 Hz or 240 Hz frame rate, commonly implemented by frame interpolation.
The display format module converts the frame-rate-converted video into an output signal conforming to the display format, such as an RGB data signal.
In some embodiments, the graphics processor 253 may be integrated with the video processor, or may be configured separately; the integrated configuration can process graphics signals output to the display, while the separate configuration can perform different functions, such as a GPU+FRC (Frame Rate Conversion) architecture.
In some embodiments, the audio processor 280 is configured to receive an external audio signal, decompress and decode the audio signal according to a standard codec protocol of an input signal, and perform noise reduction, digital-to-analog conversion, and amplification processing, so as to obtain a sound signal that can be played in a speaker.
In some embodiments, video processor 270 may include one or more chips. The audio processor may also comprise one or more chips.
In some embodiments, video processor 270 and audio processor 280 may be separate chips or may be integrated together with the controller in one or more chips.
In some embodiments, under the control of the controller 250, the audio output receives the sound signal output by the audio processor 280. Besides the speaker 286 carried by the display device 200 itself, the audio output may include an external sound output terminal, such as an external sound interface or an earphone interface, that outputs to the sound-producing device of an external device. It may also include a near-field communication module in the communication interface, for example a Bluetooth module for outputting sound through a Bluetooth speaker.
The power supply 290 supplies power input from an external power source to the display device 200 under the control of the controller 250. The power supply 290 may include a built-in power circuit installed inside the display device 200, or an external power supply, with a power interface provided in the display device 200 for the external power supply.
The user interface 265 is used to receive an input signal from a user and then transmit the received user input signal to the controller 250. The user input signal may be a remote control signal received through an infrared receiver, and various user control signals may be received through a network communication module.
In some embodiments, a user inputs a user command through the control apparatus 100 or the mobile terminal 300, the user input interface is then responsive to the user input through the controller 250, and the display device 200 is then responsive to the user input.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 275, and the user input interface receives the user input command through the Graphical User Interface (GUI). Or the user may input the user command by inputting a specific sound or gesture, the user input interface recognizes the sound or gesture through the sensor, and receives the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of a user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a graphically displayed user interface that is related to computer operations. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
The memory 260 includes memory storing various software modules for driving the display device 200. Such as: various software modules stored in the first memory, including: at least one of a base module, a detection module, a communication module, a display control module, a browser module, various service modules, and the like.
The base module is a bottom software module for signal communication between the various hardware in the display device 200 and for sending processing and control signals to the upper modules. The detection module is used for collecting various information from various sensors or user input interfaces and carrying out digital-to-analog conversion and analysis management.
For example, the voice recognition module includes a voice analysis module and a voice instruction database module. The display control module is used for controlling the display to display the image content, and can be used for playing the multimedia image content, the UI interface and other information. And the communication module is used for carrying out control and data communication with external equipment. And the browser module is used for executing data communication between the browsing servers. And the service module is used for providing various services and various application programs. Meanwhile, the memory 260 also stores received external data and user data, images of various items in various user interfaces, visual effect maps of focus objects, and the like.
Fig. 3 illustrates a block diagram of a configuration of the control device 100 according to some embodiments. As shown in fig. 3, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface, a memory, and a power supply.
The control device 100 is configured to control the display device 200: it receives the user's input operation instructions and converts them into instructions that the display device 200 can recognize and respond to, acting as an intermediary for interaction between the user and the display device 200. For example, the user operates the channel up/down keys on the control device 100, and the display device 200 responds to the channel up/down operation.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications for controlling the display apparatus 200 according to user's needs.
In some embodiments, as shown in fig. 1, the mobile terminal 300 or another intelligent electronic device may function similarly to the control device 100 after installing an application that manipulates the display device 200. For example, the functions of the physical keys of the control device 100 may be implemented through function keys or virtual buttons of a graphical user interface available on the mobile terminal 300 or other intelligent electronic device.
The controller 110 includes a processor 112 and RAM 113 and ROM 114, a communication interface 130, and a communication bus. The controller is used to control the operation and operation of the control device 100, as well as the communication collaboration among the internal components and the external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the display device 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display device 200. The communication interface 130 may include at least one of a WiFi chip 131, a bluetooth module 132, an NFC module 133, and other near field communication modules.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touchpad 142, a sensor 143, keys 144, and other input interfaces. Such as: the user can implement a user instruction input function through actions such as voice, touch, gesture, press, and the like, and the input interface converts a received analog signal into a digital signal and converts the digital signal into a corresponding instruction signal, and sends the corresponding instruction signal to the display device 200.
The output interface includes an interface that transmits the received user instruction to the display device 200. In some embodiments, an infrared interface or a radio frequency interface may be used. For the infrared signal interface, the user input instruction needs to be converted into an infrared control signal according to an infrared control protocol and sent to the display device 200 through the infrared sending module. For the radio frequency signal interface, the user input instruction is converted into a digital signal, modulated according to a radio frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio frequency transmission terminal.
In some embodiments, the control device 100 includes at least one of a communication interface 130 and an input-output interface 140. The control device 100 is provided with a communication interface 130 such as: the WiFi, bluetooth, NFC, etc. modules may send the user input instruction to the display device 200 through a WiFi protocol, or a bluetooth protocol, or an NFC protocol code.
A memory 190 is used for storing various operation programs, data and applications for driving and controlling the control device 100 under the control of the controller. The memory 190 may store various control signal instructions input by a user.
A power supply 180 for providing operating power support for the various elements of the control device 100 under the control of the controller. May be a battery and associated control circuitry.
In some embodiments, the system may include a Kernel (Kernel), a command parser (shell), a file system, and an application. The kernel, shell, and file system together form the basic operating system architecture that allows users to manage files, run programs, and use the system. After power-up, the kernel is started, the kernel space is activated, hardware is abstracted, hardware parameters are initialized, virtual memory, a scheduler, signal and inter-process communication (IPC) are operated and maintained. After the kernel is started, shell and user application programs are loaded again. The application program is compiled into machine code after being started to form a process.
A schematic diagram of the software configuration in the display device 200 according to some embodiments is schematically shown in fig. 4. Referring to FIG. 4, in some embodiments, the system is divided into four layers, from top to bottom: an application layer (referred to as the "application layer"), an application framework layer (Application Framework, referred to as the "framework layer"), an Android runtime and system library layer (referred to as the "system runtime layer"), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, a camera application, and the like; and may be an application program developed by a third party developer, such as a hi-see program, a K-song program, a magic mirror program, etc. In particular implementations, the application packages in the application layer are not limited to the above examples, and may actually include other application packages, which the embodiments of the present application do not limit.
The framework layer provides an application programming interface (API) and programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions. The application framework layer acts as a processing center that decides the actions of the applications in the application layer. Through the API interface, an application program can access the resources in the system and acquire the services of the system in execution.
As shown in fig. 4, the application framework layer in the embodiment of the present application includes managers (Managers), a Content Provider, and the like, where the managers include at least one of the following modules: an Activity Manager, used to interact with all activities running in the system; a Location Manager, used to provide system services or applications with access to the system location services; a Package Manager, used to retrieve various information about the application packages currently installed on the device; a Notification Manager, used to control the display and clearing of notification messages; and a Window Manager, used to manage the icons, windows, toolbars, wallpapers, and desktop components on the user interface.
In some embodiments, the activity manager is to: the lifecycle of each application program is managed, as well as the usual navigation rollback functions, such as controlling the exit of the application program (including switching the currently displayed user interface in the display window to the system desktop), opening, backing (including switching the currently displayed user interface in the display window to the previous user interface of the currently displayed user interface), etc.
In some embodiments, the window manager is configured to manage all window procedures, such as obtaining a display screen size, determining whether there is a status bar, locking the screen, intercepting the screen, controlling display window changes (e.g., scaling the display window down, dithering, distorting, etc.), and so on.
In some embodiments, the system runtime layer provides support for the upper layer, the framework layer, and when the framework layer is in use, the android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions to be implemented by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 4, the kernel layer contains at least one of the following drivers: audio drive, display drive, bluetooth drive, camera drive, WIFI drive, USB drive, HDMI drive, sensor drive (e.g., fingerprint sensor, temperature sensor, touch sensor, pressure sensor, etc.), and the like.
In some embodiments, the kernel layer further includes a power driver module for power management.
In some embodiments, the software programs and/or modules corresponding to the software architecture in fig. 4 are stored in the first memory or the second memory shown in fig. 2 or fig. 3.
In some embodiments, taking the magic mirror application (a photographing application) as an example: when the remote control receiving device receives an input operation of the remote control, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the input operation into an original input event (including the value of the input operation, the timestamp of the input operation, etc.), and the original input event is stored at the kernel layer. The application framework layer acquires the original input event from the kernel layer, identifies the control corresponding to the input event according to the current position of the focus, and takes the input operation as a confirmation operation; here the control corresponding to the confirmation operation is the control of the magic mirror application icon. The magic mirror application then calls an interface of the application framework layer to start itself, and further starts the camera driver by calling the kernel layer, so that a still image or video is captured through the camera.
In some embodiments, for a display device with a touch function, taking a split screen operation as an example, the display device receives an input operation (such as a split screen operation) acted on a display screen by a user, and the kernel layer may generate a corresponding input event according to the input operation and report the event to the application framework layer. The window mode (e.g., multi-window mode) and window position and size corresponding to the input operation are set by the activity manager of the application framework layer. And window management of the application framework layer draws a window according to the setting of the activity manager, then the drawn window data is sent to a display driver of the kernel layer, and the display driver displays application interfaces corresponding to the window data in different display areas of the display screen.
An icon control interface display schematic of an application in a display device 200 according to some embodiments is illustrated in fig. 5. In some embodiments, as shown in fig. 5, the application layer contains at least one icon control that the application can display in the display, such as: a live television application icon control, a video on demand application icon control, a media center application icon control, an application center icon control, a game application icon control, and the like.
In some embodiments, the live television application may provide live television via different signal sources. For example, a live television application may provide television signals using inputs from cable television, radio broadcast, satellite services, or other types of live television services. And, the live television application may display video of the live television signal on the display device 200.
In some embodiments, the video on demand application may provide video from different storage sources. Unlike live television applications, video-on-demand provides video displays from some storage sources. For example, video-on-demand may come from the server side of cloud storage, from a local hard disk storage containing stored video programs.
In some embodiments, the media center application may provide various multimedia content playing applications. For example, a media center may be a different service than live television or video on demand, and a user may access various images or audio through a media center application.
In some embodiments, an application center may be provided to store various applications. An application may be a game or any other application that is associated with a computer system or other device but can run on a smart television. The application center may obtain these applications from different sources, store them in local storage, and run them on the display device 200.
In some embodiments, the display device is typically configured with an application-level player, such as ijkplayer, exoplayer, adoplayer, or PPLIVETVPLAYER, for playing audio and video files. When an application-level player plays an audio/video file, the file is usually decapsulated; the resulting audio data, video data, and subtitle data are decoded; and playback is realized after rendering.
Application-level players typically use soft decoding for audio, such as ffmpeg (a third-party open-source decapsulation and decoding library), the audio extractor (the Android native decapsulation component), or a self-packaged decoder. Because most application-level players play network film sources, whose audio coding formats are fairly uniform, soft decoding has little impact on system overhead.
However, with the diversification of network playback and the development of the underlying chip technology of display devices, better audio-visual effects need to be provided. Soft decoding of audio occupies more CPU and memory, is generally limited to 2 channels, and yields a lower sampling rate and bit rate, so the decoded audio/video files cannot achieve a high audio-visual effect. The application-level player therefore needs to use hard decoding (mediacodec) to achieve better audio-visual effects.
An application-level player in the display device that decodes audio data in hard decoding mode can preprocess DTS and Dolby sound effects at the decoding end, decoding audio data that has 2 channels before decoding into 6 or 8 channels and thereby enhancing the sound effect. In addition, as audio and video technology develops, audio coding becomes more complex; hard decoding effectively reduces system overhead and prepares for backward compatibility.
To this end, an embodiment of the present invention provides a display device including a controller, in which an application-level player for playing a specified audio/video file is configured. The display device enables the audio decoding stream of the ijkplayer-based application-level player to call the mediacodec (Android native decoder) hard decoding function, adds multi-channel output support on the mediacodec side, and provides compatible support for the diverse audio decoding formats of existing chip schemes such as Nova, Mtk, and Hisi.
A flowchart of an audio hard decoding method of an application level player according to some embodiments is illustrated in fig. 6. In the display device provided by the embodiment of the invention, when the audio hard decoding method of the application-level player is executed, the configured application-level player performs the following steps:
S1, acquiring audio hard decoding parameters and pre-decoding audio data, wherein the pre-decoding audio data refers to audio data obtained by decapsulating a specified audio/video file, and the audio hard decoding parameters refer to the parameters required for performing audio hard decoding processing on the pre-decoding audio data.
When the display device plays the specified audio/video file, the application-level player must first be started, and the parameters required for audio hard decoding of the audio data in the file, i.e., the audio hard decoding parameters, are set by the application-level player through SetOption (an upper-layer function interface exposed by the player).
The audio hard decoding parameters may be set based on the user's selection. For example, if the user wants the best sound effect when playing a specified audio/video file, the Dolby sound effect option may be selected, and the audio hard decoding parameters that achieve the Dolby sound effect are set accordingly.
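As an illustration of this SetOption step, the sketch below shows how such parameters might be collected before playback; it is a minimal sketch in C, and the structure name, field names, and values are illustrative assumptions, not the player's actual option keys.

```c
/* A minimal sketch in C of collecting audio hard decoding parameters before
 * playback. The structure, field names, and values are illustrative
 * assumptions, not the player's actual SetOption keys. */
#include <stdbool.h>

typedef struct AudioHardDecodeParams {
    const char *mime;      /* audio format, e.g. "audio/eac3" for Dolby content */
    int sample_rate;       /* target sample rate, e.g. 48000 */
    int channel_count;     /* requested output channels, e.g. 6 or 8 */
    bool use_hard_decode;  /* select mediacodec rather than soft decoding */
} AudioHardDecodeParams;

/* Mirrors the SetOption step: the user's Dolby selection drives the values. */
static void set_audio_hard_decode_params(AudioHardDecodeParams *p, bool dolby_selected) {
    p->use_hard_decode = dolby_selected;
    if (dolby_selected) {
        p->mime          = "audio/eac3";
        p->sample_rate   = 48000;
        p->channel_count = 6;   /* decode the 2-channel source up to 5.1 */
    }
}
```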
A block diagram of an application level player according to some embodiments is illustrated in fig. 7. Referring to fig. 7, the application level player includes a download and decapsulation module, a pre-decoding data queue, a decoding module, a post-decoding data queue, and a rendering module.
The download and decapsulation module downloads the film source of the specified audio/video file and decapsulates it to obtain audio data, video data, and subtitle data. The pre-decoding data queues store the audio data in a pre-decoding audio data queue (Audio Pkg queue), the video data in a pre-decoding video data queue (Video Pkg queue), and the subtitle data in a pre-decoding subtitle data queue (Subtitle Pkg queue). The decoding module performs hard decoding (MediaCodec) or soft decoding (avcodec) of the audio data, video data, and subtitle data. The post-decoding data queues place decoded audio data into the decoded audio data queue (Asmq), decoded video data into the decoded video data queue (Vpicq), and decoded subtitle data into the decoded subtitle data queue (Subpicq). The rendering module renders the decoded video, audio, and subtitle data, so that the specified audio/video file is played with a higher sound effect.
In some embodiments, when the audio data is hard decoded, the application-level player puts the pre-decoding audio data obtained through decapsulation into the pre-decoding audio data queue so that the decoding module can perform the audio hard decoding process.
Since the decoding module supports two decoding modes, the decoding mode to be used must be selected after the application-level player acquires the pre-decoding audio data.
In some embodiments, after performing the acquiring of the pre-decoding audio data, the application level player is further configured to: selecting a corresponding audio decoding mode for decoding the appointed audio and video file based on the audio hard decoding parameters; and if the selected audio decoding mode is an audio hard decoding mode, performing audio hard decoding processing on the appointed audio-video file.
Because the audio hard decoding parameters identify whether the user has selected sound-effect enhancement processing, if the user selects such processing when playing the specified audio/video file (for example, selects the Dolby sound effect), the audio hard decoding mode can be selected according to the audio hard decoding parameters so as to perform audio hard decoding processing on the specified audio/video file.
S2, creating an audio hard decoder based on the audio hard decoding parameters.
After the application-level player stores the pre-decoding audio data in the pre-decoding audio data queue, the decoding module can perform audio hard decoding processing on that data. When performing the audio hard decoding process, the decoding module creates an audio hard decoder (Android MediaCodec) according to the audio hard decoding parameters. Because the audio hard decoder adopts an asynchronous decoding mode, a hard decoding input thread and a hard decoding output thread must be created in the audio hard decoder.
In some embodiments, the application level player, when executing the creation of the audio hard decoder based on the audio hard decoding parameters, is further configured to execute the steps of:
Step 21, creating an audio hard decoder.
Step 22, configuring the audio hard decoder based on the audio hard decoding parameters.
Step 23, creating a hard decoding input thread and a hard decoding output thread in an audio hard decoder configured with audio hard decoding parameters.
When the audio hard decoding process is performed, the application-level player calls the decoding module to create an audio hard decoder, which performs the audio hard decoding process on the pre-decoding audio data.
In order for the audio data after hard decoding to have the parameters required for high sound effects, such as a higher sampling rate and bit rate and an increased number of channels, the audio hard decoder must be configured according to the audio hard decoding parameters. The audio hard decoding parameters include decoding-related media parameters that can achieve high sound effects, such as the sampling rate, bit rate, channel count, and audio format.
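A minimal sketch of this configuration step is shown below using the Android NDK C API (AMediaCodec); the patent's decoding module reaches the Java MediaCodec through the jni layer instead, but the configured fields are the same. The MIME type "audio/eac3" and the numeric values are illustrative assumptions.

```c
/* A minimal sketch of step 22 using the Android NDK C API; MIME type and
 * values are illustrative assumptions about a Dolby film source. */
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>
#include <stddef.h>

static AMediaCodec *create_configured_decoder(void) {
    AMediaFormat *fmt = AMediaFormat_new();
    AMediaFormat_setString(fmt, AMEDIAFORMAT_KEY_MIME, "audio/eac3");
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_SAMPLE_RATE, 48000);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_CHANNEL_COUNT, 6);

    AMediaCodec *codec = AMediaCodec_createDecoderByType("audio/eac3");
    if (codec == NULL) {
        AMediaFormat_delete(fmt);
        return NULL;
    }
    /* Audio decoding: no output surface, no crypto session, flags = 0. */
    AMediaCodec_configure(codec, fmt, NULL, NULL, 0);
    AMediaCodec_start(codec);
    AMediaFormat_delete(fmt);
    return codec;
}
```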
Meanwhile, in order to realize asynchronous decoding, a direct dependency between the pre-decoding and post-decoding sides must be avoided; otherwise, if an abnormality before decoding prevents decoded data from being obtained, a lock-up occurs and the normal decoding process is affected. Thus, in some embodiments, a hard decoding input thread and a hard decoding output thread are created in the audio hard decoder configured with the audio hard decoding parameters.
The hard decoding input thread and the hard decoding output thread are not directly connected but are independent of each other: the hard decoding input thread feeds audio data into the audio hard decoder for decoding, and when audio data needs to be output, the hard decoding output thread acquires the decoded audio data from the audio hard decoder and performs the subsequent operations. This asynchronous decoding mode prevents a thread from being blocked by an abnormality on the other side.
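The sketch below illustrates one way the two independent threads might be created, assuming POSIX threads and the codec handle from the configuration sketch above; the context structure and names are illustrative.

```c
/* A minimal sketch of step 23 with POSIX threads: the two threads share only
 * the codec handle and an exit flag (the "exit flag bit" described below), so
 * a stall on one side cannot deadlock the other. Names are illustrative. */
#include <media/NdkMediaCodec.h>
#include <pthread.h>
#include <stdatomic.h>

typedef struct HardDecoderCtx {
    AMediaCodec *codec;     /* from create_configured_decoder() above */
    atomic_bool  exit_flag; /* checked in real time by both threads */
} HardDecoderCtx;

static void *input_thread(void *arg);   /* feeds pre-decoding audio data; sketched later */
static void *output_thread(void *arg);  /* drains decoded pbuffer data; sketched later */

static void start_decoder_threads(HardDecoderCtx *ctx,
                                  pthread_t *in_tid, pthread_t *out_tid) {
    atomic_store(&ctx->exit_flag, false);
    pthread_create(in_tid, NULL, input_thread, ctx);
    pthread_create(out_tid, NULL, output_thread, ctx);
}
```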
The decoding module must communicate with the audio hard decoder when creating it and when invoking the hard decoding input thread and the hard decoding output thread within it.
A method flow diagram for creating an audio hard decoder according to some embodiments is illustrated in fig. 8. To this end, in some embodiments, referring to fig. 8, the application level player, when executing the creation of the audio hard decoder, is further configured to:
S211, acquiring an operation environment parameter of the audio hard decoder, wherein the operation environment parameter is a parameter required when the audio hard decoder is called.
S212, compiling the operation environment parameters to generate a cross-language calling file comprising function names;
S213, obtaining the function name corresponding to the audio hard decoding parameter, and matching the function name corresponding to the audio hard decoding parameter with the function names in the cross-language calling file;
S214, when the function names are matched and consistent, an audio hard decoder is created.
If the decoding module in the application-level player is to call the audio hard decoder, it must hold the parameters related to the callable audio hard decoder, that is, the operation environment parameters of the audio hard decoder.
The application-level player compiles the running environment parameters that enable calling the audio hard decoder to generate a cross-language calling file (a jni file). The cross-language calling file contains a number of running environment parameters; running environment parameters of the same class are grouped under one Java method name, i.e., a function name, so the file contains a number of function names, each corresponding to at least one running environment parameter that enables calling the audio hard decoder. The function names in the cross-language calling file include, but are not limited to: a method for setting the audio media format, a method for starting decoding, a method for acquiring the input buffer index, a method for acquiring output data, a method for stopping decoding, and the like.
After setting the parameters required for audio hard decoding of the audio data in the audio/video file (the audio hard decoding parameters), the application-level player stores a number of function names in its decoding module. The function names corresponding to the audio hard decoding parameters include, but are not limited to: setting the audio media format, starting decoding, acquiring the input buffer index, acquiring output data, stopping decoding, and the like.
Therefore, by matching the function name corresponding to the audio hard decoding parameter with the function names in the cross-language calling file, it can be determined whether the decoding module in the application-level player can create the audio hard decoder.
If the function name corresponding to the audio hard decoding parameter matches a function name in the cross-language calling file, the decoding module in the application-level player can create an audio hard decoder through the jni layer, so that the decoding module can call the audio hard decoder.
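The name-matching idea can be illustrated with plain JNI, where resolving a method ID succeeds only if the requested function name exists in the target class; this is a minimal sketch, and probing MediaCodec.start() as the "start decoding" function name is an assumption made for illustration.

```c
/* A minimal sketch of the name-matching idea in steps S213-S214 using plain
 * JNI: resolving a method ID succeeds only when the requested function name
 * exists in the target class, which is the "match" condition. */
#include <jni.h>
#include <stdbool.h>

static bool can_create_hard_decoder(JNIEnv *env) {
    jclass cls = (*env)->FindClass(env, "android/media/MediaCodec");
    if (cls == NULL) {                      /* runtime environment unavailable */
        (*env)->ExceptionClear(env);
        return false;
    }
    /* "start decoding" function name; the signature of MediaCodec.start() is ()V. */
    jmethodID mid = (*env)->GetMethodID(env, cls, "start", "()V");
    if (mid == NULL) {                      /* function names did not match */
        (*env)->ExceptionClear(env);
        return false;
    }
    return true;                            /* safe to create the audio hard decoder */
}
```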
S3, calling the audio hard decoder to carry out audio hard decoding processing on the audio data before decoding to obtain decoded audio data.
After the creation of the audio hard decoder including the hard decoding input thread and the hard decoding output thread is completed, the audio hard decoding process can be performed on the pre-decoding audio data of the specified audio/video file. Since the audio data obtained from the audio hard decoding (MediaCodec) process is stored in pbuffer structure form (the storage unit of decoded audio data), decoded audio data in pbuffer structure storage form is obtained after the hard decoding input thread and the hard decoding output thread have run.
In some embodiments, the application level player, upon performing the invoking of the audio hard decoder to perform the audio hard decoding process on the pre-decoding audio data resulting in the post-decoding audio data, is further configured to:
Step 301, calling a hard decoding input thread in the audio hard decoder to perform audio hard decoding processing on the pre-decoding audio data to obtain decoded audio data;
Step 302, writing the decoded audio data into a hard decoding output thread in the audio hard decoder to obtain decoded audio data in pbuffer structural body storage form, wherein the pbuffer structural body storage form refers to the storage form adopted by the audio hard decoder.
Because the audio hard decoder includes the hard decoding input thread and the hard decoding output thread, the hard decoding input thread is called to hard decode the pre-decoding audio data, obtaining the decoded audio data. The decoded audio data is then stored to the hard decoding output thread, yielding decoded audio data in pbuffer structure storage form.
A data flow diagram of a hard decode input thread according to some embodiments is illustrated in fig. 9. In some embodiments, referring to fig. 9, the application level player, when executing step 301, i.e. invoking a hard decoding input thread in the audio hard decoder to perform audio hard decoding processing on the pre-decoding audio data, is further configured to execute the following steps:
Step 311, calling a hard decoding input thread in the audio hard decoder to obtain the audio data before decoding and the input buffer index.
Step 312, the audio data before decoding is written into the input buffer index to perform audio hard decoding processing, so as to obtain the audio data after decoding.
In the audio hard decoding process, the application-level player starts the audio hard decoder, i.e., calls the hard decoding input thread to perform hard decoding. An exit flag bit is provided in the hard decoding input thread; if the exit flag bit is identified, the hard decoding input thread exits. Therefore, while the hard decoding input thread is running, whether the exit flag bit exists is checked in real time; if it is not identified, the hard decoding process is executed, i.e., the pre-decoding audio data and the input buffer index required for hard decoding are acquired.
Since the specified audio/video file is played while being decoded, a user may operate on it during playback, for example fast-forwarding or rewinding. Such operations generate redundant data, and to guarantee the effect of the hard decoding process, the redundant data must be handled before the input buffer index is acquired.
To this end, in some embodiments, the application level player, prior to performing the acquiring of the input buffer index, is further configured to perform the steps of:
Step 3111, based on the user operation at the time of playing the specified audio/video file, determining whether or not the audio data needs to be emptied.
Step 3112, if the audio data needs to be emptied, the audio data stored in the audio hard decoder is emptied.
Step 3113, if no flushing of the audio data is required, execute the step of obtaining the input buffer index.
In some embodiments, the redundant data may be handled by clearing the audio data. Therefore, if the application-level player recognizes that the user has performed a corresponding operation, such as rewinding or fast-forwarding, it determines that the audio data needs to be emptied, and the audio data stored in the audio hard decoder is cleared. The audio data stored in the audio hard decoder is the audio data obtained by the previous hard decoding process.
The hard decoding input thread performs the hard decoding process on the pre-decoding audio data cyclically: each time one frame of pre-decoding audio data is acquired, it is hard decoded once through the input buffer index, and the decoded audio data is stored toward the hard decoding output thread.
If the user performs no operation during playback of the specified audio/video file, the audio data does not need to be emptied, and the input buffer index for hard decoding can be acquired directly. The pre-decoding audio data is then written into the input buffer index for audio hard decoding processing, obtaining the decoded audio data.
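A minimal sketch of such a hard decoding input thread is given below using the NDK AMediaCodec API, reusing the HardDecoderCtx from the earlier thread sketch; get_next_packet() and seek_requested() are assumed helpers standing in for the pre-decoding audio data queue and the user-operation check.

```c
/* A minimal sketch of the hard decoding input thread (steps 311-312 and
 * 3111-3113). AudioPacket and the two extern helpers are assumptions. */
#include <media/NdkMediaCodec.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

typedef struct { const uint8_t *data; size_t size; int64_t pts_us; } AudioPacket;
extern bool seek_requested(void);               /* fast-forward/rewind detected */
extern bool get_next_packet(AudioPacket *pkt);  /* one frame of pre-decoding data */

static void *input_thread(void *arg) {
    HardDecoderCtx *ctx = (HardDecoderCtx *)arg;
    while (!atomic_load(&ctx->exit_flag)) {      /* exit flag bit check */
        if (seek_requested()) {
            AMediaCodec_flush(ctx->codec);       /* empty the stored audio data */
            continue;
        }
        AudioPacket pkt;
        if (!get_next_packet(&pkt))
            continue;

        ssize_t idx = AMediaCodec_dequeueInputBuffer(ctx->codec, 10000 /* us */);
        if (idx < 0)
            continue;                            /* no input buffer index available yet */

        size_t cap = 0;
        uint8_t *buf = AMediaCodec_getInputBuffer(ctx->codec, (size_t)idx, &cap);
        size_t n = pkt.size < cap ? pkt.size : cap;
        memcpy(buf, pkt.data, n);                /* write into the input buffer index */
        AMediaCodec_queueInputBuffer(ctx->codec, (size_t)idx, 0, n,
                                     (uint64_t)pkt.pts_us, 0);
    }
    return NULL;
}
```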
When the decoding module (SDL_AMediaCodec) in the application-level player writes the pre-decoding audio data into the input buffer index, communication between the decoding module and the hard decoding input thread must be established.
Since the decoding module in the application-level player calls the audio hard decoder through the jni (Java Native Interface, the Java cross-language intermediate module) layer, the decoding module must likewise use the jni layer for transmission when writing the pre-decoding audio data into the input buffer index.
To this end, in some embodiments, the application level player, in performing the audio hard decoding process of writing pre-decoding audio data into the input buffer index, is further configured to perform the steps of:
Step 3121, obtaining a function name corresponding to the audio data before decoding and a cross-language call file including the function name, where the cross-language call file refers to a file generated based on the running environment parameters of the audio hard decoder.
Step 3122, matching the function name corresponding to the audio data before decoding with the function name in the cross-language call file.
Step 3123, when the function names match, writing the pre-decoding audio data into the input buffer index to perform audio hard decoding processing.
If the decoding module in the application-level player is to call the hard decoding input thread in the audio hard decoder, the cross-language calling file must first be acquired. The cross-language calling file is the file generated, when the application-level player creates the audio hard decoder, from the parameters required to call it; for the specific generation process, refer to the implementation of steps S211 to S212 above, which is not repeated here. Once generated, the file can be obtained and used directly whenever the audio hard decoder is subsequently called.
The cross-language calling file contains the function names corresponding to a number of running environment parameters, and the decoding module in the application-level player stores the function names corresponding to the pre-decoding audio data, which include, but are not limited to: setting the audio media format, starting decoding, acquiring the input buffer index, acquiring output data, stopping decoding, and the like.
Therefore, by matching the function name corresponding to the pre-decoding audio data with the function names in the cross-language calling file, it can be determined whether the decoding module in the application-level player can call the hard decoding input thread in the audio hard decoder.
If the function name corresponding to the pre-decoding audio data matches a function name in the cross-language calling file, communication between the decoding module in the application-level player and the audio hard decoder can be established through the jni layer, and the decoding module can call the hard decoding input thread. Once the decoding module communicates with the audio hard decoder, it writes the pre-decoding audio data into the hard decoding input thread, specifically into the input buffer index, and the hard decoding process yields the decoded audio data.
The decoded audio data is stored in the hard decoding output thread for subsequent audio output. Since the audio hard decoder stores decoded audio data in pbuffer structure form, decoded audio data in pbuffer structure storage form is obtained after it is stored to the hard decoding output thread.
A flowchart of a method of execution of a hard decode output thread according to some embodiments is illustrated in fig. 10; a data flow diagram of a hard decode output thread according to some embodiments is illustrated in fig. 11. In some embodiments, referring to fig. 10 and 11, the application level player, when executing step 302, i.e., writing the decoded audio data to a hard decoding output thread in the audio hard decoder to obtain the decoded audio data in pbuffer structural body storage form, is further configured to:
S321, calling a hard decoding output thread in the audio hard decoder, and acquiring an output buffer index from the audio hard decoder.
S322, writing the decoded audio data into an output buffer index to obtain the decoded audio data in a pbuffer structural body storage form.
In the audio hard decoding process, each time the hard decoding input thread finishes hard decoding one frame of pre-decoding audio data, the resulting decoded audio data is stored in the audio hard decoder. When the output audio data needs to be rendered to play the specified audio/video file, the application-level player calls the hard decoding output thread to acquire the decoded audio data.
An exit flag bit is likewise provided in the hard decoding output thread; if it is identified, the hard decoding output thread exits. Therefore, while the hard decoding output thread is running, whether the exit flag bit exists is checked in real time; if it is not identified, the process of acquiring the decoded audio data is executed.
When storing the decoded audio data written through the input buffer index, the audio hard decoder stores it in the output buffer index. For this purpose, the hard decoding output thread is started, the output buffer index is obtained from the audio hard decoder, and the decoded audio data is stored in the output buffer index. Since the audio hard decoder stores audio data in pbuffer structure form, the decoded audio data in pbuffer structure storage form is obtained from the output buffer index.
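A corresponding sketch of the hard decoding output thread follows, again reusing HardDecoderCtx; store_decoded() is an assumed helper standing in for the pbuffer store, and info.offset plus info.size delimit the valid decoded bytes inside the returned buffer, matching the audio data offset discussed later.

```c
/* A minimal sketch of the hard decoding output thread (steps S321-S322);
 * store_decoded() is an assumed helper standing in for the pbuffer store. */
#include <media/NdkMediaCodec.h>
#include <stdatomic.h>

extern void store_decoded(const uint8_t *data, size_t size, int64_t pts_us);

static void *output_thread(void *arg) {
    HardDecoderCtx *ctx = (HardDecoderCtx *)arg;
    AMediaCodecBufferInfo info;
    while (!atomic_load(&ctx->exit_flag)) {      /* exit flag bit check */
        ssize_t idx = AMediaCodec_dequeueOutputBuffer(ctx->codec, &info, 10000);
        if (idx < 0)
            continue;                            /* timeout or format change; retry */

        size_t cap = 0;
        uint8_t *buf = AMediaCodec_getOutputBuffer(ctx->codec, (size_t)idx, &cap);
        /* Only bytes [offset, offset + size) hold real decoded audio data. */
        store_decoded(buf + info.offset, (size_t)info.size, info.presentationTimeUs);
        AMediaCodec_releaseOutputBuffer(ctx->codec, (size_t)idx, false);
    }
    return NULL;
}
```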
S4, converting the decoded audio data into decoded audio data in an avframe structural body storage form, wherein the avframe structural body storage form refers to the storage form adopted by the application-level player.
Since the application-level player generally uses ffmpeg as its decapsulation tool, its decapsulated audio storage data structure is highly compatible with avframe (the decoded audio storage unit) used by ffmpeg's own soft decoding module. The application-level player therefore needs to unify the decoded audio data into the avframe format, i.e., convert the decoded audio data in pbuffer structure storage form into decoded audio data in avframe structure storage form, to guarantee output of the decoded audio data.
A method flowchart for conversion to the avframe structure according to some embodiments is illustrated in fig. 12; a data flow diagram of the avframe structure conversion according to some embodiments is illustrated in fig. 13. Referring to fig. 12 and 13, in some embodiments, the application level player, when performing the conversion of the decoded audio data into decoded audio data in avframe structural body storage form, is further configured to perform the following steps:
S41, obtaining audio data output format information and decoded audio data in a pbuffer structural body storage form, wherein the audio data output format information refers to the information required for outputting in the avframe structural body storage form.
S42, acquiring the audio data offset from the decoded audio data in the pbuffer structural body storage form.
S43, obtaining real decoded audio data in a pbuffer structural body storage form based on the audio data offset and the decoded audio data.
S44, creating an avframe structure based on the real decoded audio data in pbuffer structural body storage form.
S45, writing the audio data output format information into the avframe structural body to obtain decoded audio data in an avframe structural body storage form.
After processing by the audio hard decoder, the resulting decoded audio data is typically in pbuffer structure storage form. To convert it into avframe structure storage form, the audio data output format information required for the conversion must first be acquired; this information comprises everything needed for output in avframe structure storage form. Meanwhile, the decoded audio data in pbuffer structure storage form is acquired from the output buffer index.
When the decoding module (SDL_AMediaCodec) in the application level player obtains the decoded audio data produced by the hard decoding process from the hard decoding output thread, communication between the decoding module and the hard decoding output thread needs to be established.
Since the decoding module in the application-level player calls the audio hard decoder through the jni (Java Native Interface, the Java cross-language intermediate module) layer, it must likewise use the jni layer for transmission when obtaining the decoded audio data produced by the hard decoding process from the hard decoding output thread (output buffer index).
In some embodiments, the application level player, upon performing the retrieval of the decoded audio data in pbuffer structural body storage form, is further configured to perform the steps of:
Step 411, obtaining the function name corresponding to the decoded audio data and a cross-language call file including the function name, where the cross-language call file refers to a file generated based on the running environment parameters of the audio hard decoder.
Step 412, matching the function name corresponding to the decoded audio data with the function name in the cross-language calling file.
Step 413, when the function names match, obtaining decoded audio data in pbuffer structure storage form from the output buffer index of the hard decoding output thread.
If the decoding module in the application level player is to call the output buffer index of the hard decoding output thread in the audio hard decoder, the cross-language call file needs to be acquired first. The cross-language calling file is a file generated based on parameters required for calling the audio hard decoder when the application level player creates the audio hard decoder, and the specific generating process may refer to the implementation process of the foregoing steps S211 to S212, which is not described herein. After the file is generated, the file can be directly obtained and used when the audio hard decoder is subsequently called.
After the decoded audio data is obtained, a number of function names are stored in the decoding module of the application-level player; the function names corresponding to the decoded audio data include, but are not limited to: setting the audio media format, starting decoding, acquiring the input buffer index, acquiring output data, stopping decoding, and the like. Since the cross-language calling file contains the function names corresponding to a number of running environment parameters, matching the function name corresponding to the decoded audio data against the function names in the cross-language calling file determines whether the decoding module in the application-level player can call the audio hard decoder and thus obtain the decoded audio data from the output buffer index in the hard decoding output thread.
If the function name corresponding to the decoded audio data matches a function name in the cross-language calling file, communication between the decoding module in the application-level player and the hard decoding output thread can be established through the jni layer, and the decoding module can call the hard decoding output thread. After this communication is established, the decoding module in the application-level player obtains the decoded audio data in pbuffer structure storage form from the output buffer index of the hard decoding output thread.
Since not all of the decoded audio data obtained by the audio hard decoder through the hard decoding process may be valid data, the audio data offset must be acquired from the decoded audio data, and the real decoded audio data in pbuffer structure storage form that can be output is determined based on that offset.
For example, if the audio data offset of the decoded audio data stored in the pbuffer structure is 10 bytes, the first 10 bytes are discarded starting from the first byte of the decoded audio data, and the data from the 11th byte onward is the real decoded audio data in pbuffer structure storage form.
From the real decoded audio data in pbuffer structure storage form, the avframe structure used by ffmpeg decapsulation is created, enabling the conversion from the pbuffer structure to the avframe structure.
The audio data output format information is then written into the avframe structure to obtain decoded audio data in avframe structure storage form. The audio data output format information includes the timestamp, channel count, sample count, sampling rate, and so on; when the audio is decoded into multiple channels, the output format information corresponding to each channel is written into the avframe structure in sequence, based on the number of channels required for audio output.
After the output format information for the specified number of channels has been written, i.e., when the number of channels still awaiting data is 0, decoded audio data in avframe structure storage form with multiple channels, a high sample count, and a high sampling rate is obtained and then written into the decoded audio data queue.
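The conversion in steps S41 to S45 can be sketched with the FFmpeg API. This minimal version assumes 16-bit interleaved PCM output and the pre-FFmpeg-5 channel-layout fields used by ijkplayer-era builds (newer FFmpeg releases use the ch_layout API instead); real_data is the pointer already advanced past the audio data offset, as in the 10-byte example above.

```c
/* A minimal sketch of steps S41-S45: wrap the real decoded pbuffer bytes in
 * an AVFrame and write the output format information into it. Assumes S16
 * interleaved PCM and pre-FFmpeg-5 channel-layout fields. */
#include <libavutil/channel_layout.h>
#include <libavutil/frame.h>
#include <libavutil/samplefmt.h>
#include <string.h>

static AVFrame *pbuffer_to_avframe(const uint8_t *real_data, size_t size,
                                   int channels, int sample_rate, int64_t pts) {
    AVFrame *frame = av_frame_alloc();                    /* step S44 */
    if (frame == NULL)
        return NULL;

    /* Step S45: write the audio data output format information. */
    frame->format         = AV_SAMPLE_FMT_S16;            /* 16-bit interleaved PCM */
    frame->sample_rate    = sample_rate;
    frame->channels       = channels;
    frame->channel_layout = av_get_default_channel_layout(channels);
    frame->nb_samples     = (int)(size / ((size_t)channels * 2)); /* 2 bytes/sample */
    frame->pts            = pts;

    if (av_frame_get_buffer(frame, 0) < 0) {              /* allocate sample buffers */
        av_frame_free(&frame);
        return NULL;
    }
    memcpy(frame->data[0], real_data, size);              /* copy the real decoded bytes */
    return frame;
}
```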
And S5, writing the decoded audio data in the avframe structural body storage form into an audio decoded data queue so as to play the appointed audio and video file.
After the decoded audio data in avframe structure storage form required for output by the application-level player is obtained, it is written into the decoded data queue, specifically into the audio decoded data queue, to facilitate playback of the specified audio/video file.
When the specified audio/video file is played, the rendering module renders the decoded audio data in the decoded audio data queue, the decoded video data in the decoded video data queue, and the decoded subtitle data in the decoded subtitle data queue, thereby realizing playback of the specified audio/video file.
Therefore, the display device provided by the embodiment of the invention enables the application-level player to invoke a hard decoding function and decode the audio data in hard decoding mode; it can preprocess DTS and Dolby sound effects at the decoding end and decode audio data with 2 channels before decoding into 6 or 8 channels, achieving the effect of enhancing the sound effect.
Different application chips, such as Nova, Mtk, and Hisi, can be configured in the display device, and the audio formats of the OMX (OpenMAX, the Android low-level decoding interface) layers of different application chips are not uniform. To ensure that the application-level player's method of decoding audio data in hard decoding mode can be applied to the OMX layers of different application chips, the application-level player must be compatible across them in hard decoding mode.
A flowchart of a method for multi-chip OMX layer decoding format compatibility according to some embodiments is illustrated in fig. 14. Referring to fig. 14, in the display device provided by the embodiment of the present invention, on the basis of executing the foregoing audio hard decoding method of the application-level player, the application-level player is further configured to execute the following steps:
S61, calling a standard decoding interface to obtain a comparison table comprising decoding formats and bottom decoding names which are in one-to-one correspondence, wherein each bottom decoding name corresponds to one application chip.
S62, determining a first bottom layer decoding name based on a decoding format corresponding to the comparison table and the audio hard decoding parameters.
S63, obtaining a second bottom layer decoding name from the configured static file, and scoring the first bottom layer decoding name and the second bottom layer decoding name.
S64, determining the bottom layer decoding name with the highest score as a target bottom layer decoding name, and establishing connection with an application chip corresponding to the target bottom layer decoding name.
In order to realize decoding-format compatibility with the OMX layers of multiple chips, the application-level player calls a standard decoding interface (MediaCodecList.getCodecInfo) to acquire all supported decoding formats and bottom-layer decoding names (omxtype); each decoding format corresponds to one bottom-layer decoding name, forming a comparison table. Each bottom-layer decoding name corresponds to one application chip.
The decoding format corresponding to the audio hard decoding parameters is obtained, and the omxtype corresponding to that input format is looked up in the comparison table by fuzzy matching, i.e., the first bottom-layer decoding name corresponding to the decoding format of the audio hard decoding parameters is found in the comparison table.
Meanwhile, the configured static file is read; it contains the decoding formats of the existing application chips and their corresponding omxtype values, so the second bottom-layer decoding name corresponding to the decoding format of the audio hard decoding parameters can be determined.
The first and second bottom-layer decoding names are scored separately, and the bottom-layer decoding name (omxtype) with the highest score is taken as the target bottom-layer decoding name. The application-level player then establishes a connection with the application chip corresponding to the target bottom-layer decoding name, realizing compatibility with the underlying differences of multiple chips.
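The lookup-and-score selection of S61 to S64 reduces to a small table search. In the sketch below, the table contents, the omxtype strings, the substring match standing in for fuzzy matching, and the scoring callback are all illustrative assumptions, not the real MediaCodecList query or the configured static file.

```c
/* A minimal sketch of the lookup-and-score selection in S61-S64; all names
 * and entries are illustrative stand-ins. */
#include <stddef.h>
#include <string.h>

typedef struct { const char *format; const char *omxtype; } CodecEntry;

/* S61: comparison table as reported by the standard decoding interface. */
static const CodecEntry table[] = {
    { "audio/eac3", "OMX.example.eac3.decoder" },   /* illustrative names */
    { "audio/ac3",  "OMX.example.ac3.decoder"  },
};

/* S62: find the first bottom-layer decoding name for a decoding format. */
static const char *lookup_omxtype(const char *fmt) {
    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (strstr(table[i].format, fmt) != NULL)   /* stand-in for fuzzy matching */
            return table[i].omxtype;
    return NULL;
}

/* S63-S64: score the two candidates and keep the higher-scoring one. */
static const char *pick_target(const char *first, const char *second,
                               int (*score)(const char *name)) {
    if (first == NULL)  return second;
    if (second == NULL) return first;
    return score(first) >= score(second) ? first : second;
}
```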
Therefore, the display device provided by the embodiment of the invention realizes the audio decoding stream of the ijkplayer-based application-level player calling the mediacodec (Android native decoder) hard decoding function, adds multi-channel output support on the mediacodec side, and provides compatible support for the diverse audio hard decoding formats of the existing Nova, Mtk, and Hisi chips.
As can be seen from the above technical solutions, in the audio hard decoding method and display device for an application-level player provided by the embodiments of the present invention, the configured application-level player acquires the audio hard decoding parameters and the pre-decoding audio data, and creates an audio hard decoder including a hard decoding input thread and a hard decoding output thread based on the audio hard decoding parameters. The hard decoding input thread is called to perform audio hard decoding processing on the pre-decoding audio data; the resulting decoded audio data is written into the hard decoding output thread to obtain decoded audio data in pbuffer structural body storage form, which is converted into decoded audio data in avframe structural body storage form and written into the audio decoded data queue to play the specified audio/video file. Therefore, the method and display device provided by the embodiments of the present invention enable the application-level player to decode audio data in hard decoding mode, preprocess the sound effect, and decode the audio data into multiple channels, thereby achieving the effect of enhancing the sound effect.
A flowchart of an audio hard decoding method of an application level player according to some embodiments is illustrated in fig. 6. Referring to fig. 6, the present application also provides an audio hard decoding method of an application level player, which is performed by the application level player in the display device provided by the foregoing embodiment, the method including:
S1, acquiring audio hard decoding parameters and audio data before decoding, wherein the audio data before decoding refers to audio data obtained by unpacking a specified audio and video file, and the audio hard decoding parameters refer to parameters required by audio hard decoding of the audio data before decoding;
S2, creating an audio hard decoder based on the audio hard decoding parameters;
S3, calling the audio hard decoder to perform audio hard decoding processing on the audio data before decoding to obtain decoded audio data;
S4, converting the decoded audio data into decoded audio data in an avframe structural body storage form, wherein the avframe structural body storage form is the storage form adopted by the application-level player;
and S5, writing the decoded audio data in the avframe structural body storage form into an audio decoded data queue so as to play the appointed audio and video file.
In a specific implementation, the present invention further provides a computer storage medium, which may store a program; when executed, the program may include some or all of the steps in each embodiment of the audio hard decoding method of the application-level player provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
It will be apparent to those skilled in the art that the techniques of the embodiments of the present invention may be implemented in software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in the embodiments, or in some parts of the embodiments, of the present invention.
The same or similar parts of the various embodiments in this specification may be referred to each other. In particular, the description of the embodiments of the audio hard decoding method of the application-level player is relatively brief since they are substantially similar to the display device embodiment; for the relevant points, refer to the description of the display device embodiment.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. The illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.
Claims (10)
1. A display device, characterized by comprising:
a controller, the controller being configured with an application level player for playing a specified audio-video file, the application level player being configured to:
Acquiring audio hard decoding parameters and pre-decoding audio data, wherein the pre-decoding audio data refers to audio data obtained by performing unpacking processing on the appointed audio-video file, and the audio hard decoding parameters refer to parameters required by performing audio hard decoding processing on the pre-decoding audio data;
creating an audio hard decoder based on the audio hard decoding parameters;
Calling the audio hard decoder to perform audio hard decoding processing on the audio data before decoding to obtain decoded audio data in a pbuffer structural body storage form, wherein the pbuffer structural body storage form refers to a storage form adopted by the audio hard decoder;
Converting the decoded audio data in pbuffer structural body storage form into decoded audio data in avframe structural body storage form, wherein the avframe structural body storage form is a storage form adopted by an application-level player;
and writing the decoded audio data in the avframe structural body storage form into an audio decoded data queue so as to play the appointed audio and video file.
2. The display device of claim 1, wherein the application level player, when executing the creating an audio hard decoder based on the audio hard decoding parameters, is further configured to:
creating an audio hard decoder;
Configuring the audio hard decoder based on the audio hard decoding parameters;
in an audio hard decoder configured with the audio hard decoding parameters, a hard decoding input thread and a hard decoding output thread are created.
3. The display device of claim 2, wherein the application level player, upon performing the invoking the audio hard decoder to perform an audio hard decoding process on the pre-decoded audio data to obtain the decoded audio data in the pbuffer structural body storage form, is further configured to:
invoking a hard decoding input thread in the audio hard decoder to carry out audio hard decoding processing on the audio data before decoding to obtain decoded audio data;
And writing the decoded audio data into a hard decoding output thread in the audio hard decoder to obtain the decoded audio data in a pbuffer structural body storage form.
4. The display device of claim 2, wherein the application level player, when executing the create audio hard decoder, is further configured to:
Acquiring operation environment parameters of the audio hard decoder, wherein the operation environment parameters are parameters required when the audio hard decoder is called;
compiling the running environment parameters to generate a cross-language calling file comprising function names;
Obtaining a function name corresponding to the audio hard decoding parameter, and matching the function name corresponding to the audio hard decoding parameter with the function name in the cross-language calling file;
when the function names match, an audio hard decoder is created.
5. A display device as recited in claim 3, wherein the application level player, upon executing the invoking a hard decode input thread in the audio hard decoder, performs an audio hard decode process on the pre-decode audio data, is further configured to:
invoking a hard decoding input thread in the audio hard decoder to acquire the audio data before decoding and an input buffer index;
And writing the audio data before decoding into the input buffer index for audio hard decoding processing to obtain the audio data after decoding.
6. The display device of claim 5, wherein the application level player, prior to executing the retrieving the input buffer index, is further configured to:
Judging whether the audio data need to be emptied or not based on user operation when the appointed audio and video file is played;
If the audio data need to be emptied, the audio data stored in the audio hard decoder are emptied;
if the audio data does not need to be emptied, the step of acquiring the input buffer index is performed.
7. A display device as claimed in claim 3, wherein the application level player, upon performing the writing of the decoded audio data into the hard decoding output thread in the audio hard decoder to obtain the decoded audio data in pbuffer structural body storage form, is further configured to:
Invoking a hard decoding output thread in the audio hard decoder, and acquiring an output buffer index from the audio hard decoder;
And writing the decoded audio data into the output buffer index to obtain the decoded audio data in a pbuffer structural body storage form.
8. A display device as recited in claim 3, wherein the application level player, upon performing the converting the decoded audio data into decoded audio data in avframe structural body storage form, is further configured to:
Acquiring audio data output format information and decoded audio data in a pbuffer structural body storage form, wherein the audio data output format information refers to information required for outputting in an avframe structural body storage form;
Acquiring an audio data offset from the decoded audio data in pbuffer structural body storage form;
Based on the audio data offset and the decoded audio data, obtaining real decoded audio data in a pbuffer structural body storage form;
Creating an avframe structural body based on the real decoded audio data in pbuffer structural body storage form;
Writing the audio data output format information into the avframe structural body to obtain decoded audio data in an avframe structural body storage form.
9. The display device of claim 1, wherein the application level player is further configured to:
calling a standard decoding interface to obtain a comparison table comprising decoding formats and bottom decoding names which are in one-to-one correspondence, wherein each bottom decoding name corresponds to an application chip;
determining a first bottom layer decoding name based on the decoding format corresponding to the comparison table and the audio hard decoding parameters;
obtaining a second bottom layer decoding name from the configured static file, and scoring the first bottom layer decoding name and the second bottom layer decoding name;
And determining the bottom layer decoding name with the highest score as a target bottom layer decoding name, and establishing connection with an application chip corresponding to the target bottom layer decoding name.
10. A method for audio hard decoding of an application level player, the method comprising:
acquiring audio hard decoding parameters and pre-decoding audio data, wherein the pre-decoding audio data refers to audio data obtained by performing unpacking processing on a specified audio/video file, and the audio hard decoding parameters refer to parameters required by performing audio hard decoding processing on the pre-decoding audio data;
creating an audio hard decoder based on the audio hard decoding parameters;
Calling the audio hard decoder to perform audio hard decoding processing on the audio data before decoding to obtain decoded audio data in a pbuffer structural body storage form, wherein the pbuffer structural body storage form refers to a storage form adopted by the audio hard decoder;
Converting the decoded audio data in pbuffer structural body storage form into decoded audio data in avframe structural body storage form, wherein the avframe structural body storage form is a storage form adopted by an application-level player;
and writing the decoded audio data in the avframe structural body storage form into an audio decoded data queue so as to play the appointed audio and video file.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010862300.6A CN114095778B (en) | 2020-08-25 | 2020-08-25 | Audio hard decoding method of application-level player and display device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010862300.6A CN114095778B (en) | 2020-08-25 | 2020-08-25 | Audio hard decoding method of application-level player and display device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114095778A CN114095778A (en) | 2022-02-25 |
CN114095778B true CN114095778B (en) | 2024-05-28 |
Family
ID=80294952
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010862300.6A Active CN114095778B (en) | 2020-08-25 | 2020-08-25 | Audio hard decoding method of application-level player and display device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114095778B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114333933B (en) * | 2022-03-11 | 2022-05-20 | 北京麟卓信息科技有限公司 | Android application low-delay audio output method on Linux platform |
CN114567784B (en) * | 2022-04-24 | 2022-08-16 | 银河麒麟软件(长沙)有限公司 | VPU video decoding output method and system for Feiteng display card |
CN117714969B (en) * | 2023-07-11 | 2024-09-06 | 荣耀终端有限公司 | Sound effect processing method, device and storage medium |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8010692B1 (en) * | 2009-11-05 | 2011-08-30 | Adobe Systems Incorporated | Adapting audio and video content for hardware platform |
CN102904857A (en) * | 2011-07-25 | 2013-01-30 | 风网科技(北京)有限公司 | Client video playing system and method thereof |
CN104754349A (en) * | 2013-12-25 | 2015-07-01 | 炫一下(北京)科技有限公司 | Method and device for hardware decoding of audio/video |
CN105808198A (en) * | 2014-12-29 | 2016-07-27 | 乐视移动智能信息技术(北京)有限公司 | Audio file processing method and apparatus applied to android system and terminal |
CN106648537A (en) * | 2016-12-29 | 2017-05-10 | 维沃移动通信有限公司 | Audio data decoding control method and mobile terminal |
CN107393566A (en) * | 2017-07-15 | 2017-11-24 | 深圳酷旗互联网有限公司 | The audio-frequency decoding method and device of a kind of Intelligent story device |
Also Published As
Publication number | Publication date |
---|---|
CN114095778A (en) | 2022-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114095778B (en) | Audio hard decoding method of application-level player and display device | |
CN112019782B (en) | Control method and display device of enhanced audio return channel | |
CN112135180B (en) | Content display method and display equipment | |
CN111970549B (en) | Menu display method and display device | |
CN112165640B (en) | Display device | |
CN112153440B (en) | Display equipment and display system | |
CN112243141B (en) | Display method and display equipment for screen projection function | |
CN112328553A (en) | Thumbnail capturing method and display device | |
CN112087671A (en) | Display method and display equipment for control prompt information of input method control | |
CN111954043B (en) | Information bar display method and display equipment | |
CN114095769B (en) | Live broadcast low-delay processing method of application-level player and display device | |
CN112269668A (en) | Application resource sharing and display equipment | |
CN116017006A (en) | Display device and method for establishing communication connection with power amplifier device | |
CN112040340A (en) | Resource file acquisition method and display device | |
CN114079827A (en) | Menu display method and display device | |
CN113438553B (en) | Display device awakening method and display device | |
CN112363683B (en) | Method and display device for supporting multi-layer display by webpage application | |
CN114390190A (en) | Display equipment and method for monitoring application to start camera | |
CN111988646A (en) | User interface display method and display device of application program | |
CN112231088B (en) | Browser process optimization method and display device | |
CN112199612B (en) | Bookmark adding and combining method and display equipment | |
CN111970554B (en) | Picture display method and display device | |
CN112291600B (en) | Caching method and display device | |
CN115119029B (en) | Display equipment and display control method | |
CN112199064B (en) | Interaction method of browser application and system platform and display equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |