CN115329122A - Audio information processing method, audio information presenting method and device - Google Patents

Audio information processing method, audio information presenting method and device

Info

Publication number
CN115329122A
CN115329122A (application CN202110513496.2A)
Authority
CN
China
Prior art keywords
audio
information
track data
audio information
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110513496.2A
Other languages
Chinese (zh)
Inventor
杨宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110513496.2A priority Critical patent/CN115329122A/en
Publication of CN115329122A publication Critical patent/CN115329122A/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 — Information retrieval of audio data
        • G06F16/61 — Indexing; Data structures therefor; Storage structures
        • G06F16/63 — Querying
        • G06F16/64 — Browsing; Visualisation therefor
        • G06F16/65 — Clustering; Classification

Abstract

The invention provides an audio information processing method, an audio information presenting method, an apparatus, an electronic device, and a storage medium. The method includes: responding to a dynamic modification instruction by dynamically modifying first audio track data to obtain second audio track data; responding to an audio service output instruction by acquiring audio information from an audio information storage hash table and configuring a corresponding audio frame data reader for the audio information; extracting, through the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information; and combining the first audio frames corresponding to different target timestamps to obtain and output a second audio frame, so as to respond to the audio service output instruction through the second audio frame. In this way, real-time modification and flexible control of audio information can be realized, the audio information processing flow that responds to an audio service output instruction becomes simpler and more convenient, and the convenience of audio information processing is improved.

Description

Audio information processing method, audio information presenting method and device
Technical Field
The present invention relates to audio information processing technologies, and in particular, to an audio information processing method, an audio information presenting method, an apparatus, an electronic device, and a storage medium.
Background
In the related art, audio information takes many forms and the demand for processing it has grown explosively. However, because the storage process of audio information is complex, audio information cannot be modified in real time or flexibly controlled during processing, which makes the user's audio information processing workflow cumbersome.
Disclosure of Invention
In view of this, embodiments of the present invention provide an audio information processing method, an audio information presenting method, an apparatus, an electronic device, and a storage medium, which can implement real-time modification and flexible control of audio information, making the audio information processing flow that responds to an audio service output instruction simpler and improving the convenience of audio information processing.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides an audio information processing method, which comprises the following steps:
analyzing and processing the template information of the audio service processing environment to obtain first audio track data;
saving the first audio track data into an audio information storage hash table, wherein the audio information storage hash table is used for saving audio information, and the first audio track data is stored in the audio information;
responding to a dynamic modification instruction, and dynamically modifying the first audio track data to obtain second audio track data;
responding to an audio service output instruction, acquiring audio information from the audio information storage hash table, and configuring a corresponding audio frame data reader for the audio information;
extracting, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information;
and combining the first audio frames corresponding to different target timestamps to obtain and output a second audio frame so as to respond to the audio service output instruction through the second audio frame.
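The claimed flow above can be illustrated with a minimal sketch. `AudioInfoMap`, `AudioInfo`, and `AudioReader` follow the glossary terms defined later in the description, but every field, frame size, and helper name here is an assumption made for illustration, not the patented implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AudioInfo:
    """Audio data source structure (glossary term 9); fields are assumed."""
    track: List[float]                               # audio track data (PCM samples)
    target_timestamps: List[int] = field(default_factory=list)

class AudioReader:
    """Reads audio frame data out of one AudioInfo (glossary term 8)."""
    def __init__(self, info: AudioInfo, frame_size: int = 4):
        self.info = info
        self.frame_size = frame_size

    def read_frame(self, timestamp: int) -> List[float]:
        start = timestamp * self.frame_size
        return self.info.track[start:start + self.frame_size]

audio_info_map: Dict[str, AudioInfo] = {}            # the audio information storage hash table

def process(template_track: List[float],
            modify: Callable[[List[float]], List[float]],
            timestamps: List[int]) -> List[float]:
    info = AudioInfo(track=template_track, target_timestamps=timestamps)
    audio_info_map["bgm"] = info                     # save first audio track data
    info.track = modify(info.track)                  # dynamic modification -> second track data
    reader = AudioReader(audio_info_map["bgm"])      # configure a reader for output
    frames = [reader.read_frame(t) for t in info.target_timestamps]
    return [s for f in frames for s in f]            # combine first frames into the second frame

# Halve the volume of a 16-sample track and output frames at timestamps 0 and 2.
out = process([0.1] * 16, lambda t: [s * 0.5 for s in t], [0, 2])
```

The sketch only shows data flow between the three structures; the real system would operate inside the rendering engine's ECS update loop.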
The embodiment of the invention also provides an audio information presentation method, which comprises the following steps:
displaying a user interface, and presenting a task control function item in the user interface, wherein the task control function item is used for dynamically modifying first audio track data through a dynamic modification instruction triggered by a condition;
responding to the trigger operation aiming at the task control function item, and acquiring animation special effect information containing an audio service output instruction;
acquiring a second audio frame corresponding to the audio service output instruction;
presenting the animated special effects information and the second audio frame in the user interface.
An embodiment of the present invention further provides an audio information processing apparatus, where the apparatus includes:
the first information transmission module is used for analyzing and processing the template information of the audio service processing environment to obtain first audio track data;
the first information processing module is used for storing the first audio track data into an audio information storage hash table, wherein the audio information storage hash table is used for storing audio information, and the audio information is used for storing the first audio track data;
the first information processing module is used for responding to a dynamic modification instruction and dynamically modifying the first audio track data to obtain second audio track data;
the first information processing module is used for responding to an audio service output instruction, acquiring audio information from the audio information storage hash table, and configuring a corresponding audio frame data reader for the audio information;
the first information processing module is configured to extract, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information;
and the first information processing module is used for performing combined processing on the first audio frames corresponding to different target timestamps to obtain and output a second audio frame so as to respond to the audio service output instruction through the second audio frame.
In the above scheme,
the first information processing module is used for analyzing the template information of the audio service processing environment and acquiring the time sequence information of the template information;
the first information processing module is used for analyzing the audio parameters corresponding to the template information according to the time sequence information of the template information, and acquiring the audio type and the audio track information parameters corresponding to the template information;
the first information processing module is configured to extract the template information based on the audio type and the audio track information parameter corresponding to the template information, so as to obtain first audio track data corresponding to the template information.
In the above scheme,
the first information processing module is used for extracting audio data from the audio resource component of the template information to construct first audio information when the audio type is a single audio;
the first information processing module is used for extracting audio data from a multimedia information component of the template information and constructing second audio information when the audio type is audio matched with the video information;
the first information processing module is used for extracting audio data from the animation resource component of the template information and constructing third audio information when the audio type is the audio matched with the animation resource;
the first information processing module is configured to combine the first audio information, the second audio information, and the third audio information to obtain first audio track data corresponding to the template information.
In the above scheme,
the first information processing module is configured to receive the dynamic modification instruction, where the dynamic modification instruction includes at least one of:
adjusting the initial playing position of the audio track data, pausing playback, resuming playback, addressing (seek) playback, trigger conditions, and playback scripts;
the first information processing module is used for dynamically modifying the first audio track data in the audio information storage hash table according to the type of the dynamic modification instruction to obtain second audio track data;
and the first information processing module is used for responding to a dynamic modification instruction and modifying the audio information storage hash table to obtain an audio information storage hash table corresponding to the second audio track data.
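The dispatch on instruction type described above can be sketched as follows; the instruction names and their effects on the track data are simplified assumptions, not the disclosed implementation:

```python
from typing import List, Optional

def apply_modification(track: List[float], instruction: str,
                       arg: Optional[int] = None) -> List[float]:
    """Dynamically modify first audio track data according to the instruction type."""
    if instruction == "adjust_start":
        return track[arg:]        # drop samples before the new initial playing position
    if instruction == "seek":
        return track[arg:]        # addressing play from a target offset
    if instruction == "pause":
        return []                 # nothing to output while paused
    return track                  # resume play / trigger condition -> unchanged here
```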
In the above scheme,
the first information processing module is used for configuring a corresponding first audio frame data reader for the audio information when responding to an audio service output instruction for the first time and acquiring the audio information from the audio information storage hash table;
the first information processing module is used for monitoring the continuous state of the audio information, keeping the continuous state of the first audio frame data reader when the audio information exists continuously, and updating data information in the audio information;
the first information processing module is used for deleting the first audio frame data reader when the audio information is removed, and configuring a second audio frame data reader according to the change of the audio information.
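The reader life cycle above (create on the first output instruction, persist while the audio information exists, delete when it is removed) can be sketched as follows; all names and the dictionary-based bookkeeping are hypothetical:

```python
from typing import Any, Callable, Dict, Optional

readers: Dict[str, Any] = {}

def get_reader(audio_info_map: Dict[str, Any], key: str,
               make_reader: Callable[[Any], Any]) -> Optional[Any]:
    """Return the persistent reader for one audio information entry."""
    info = audio_info_map.get(key)
    if info is None:
        readers.pop(key, None)            # audio information removed -> delete its reader
        return None
    if key not in readers:
        readers[key] = make_reader(info)  # first output instruction -> configure a reader
    return readers[key]                   # continuous state kept across later outputs
```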
In the above scheme,
the first information processing module is used for determining a target time parameter matched with the dynamic modification instruction when the dynamic modification instruction is addressing playing or the playing starting position is adjusted to the starting position, and storing the target time parameter in an audio information storage hash table;
the first information processing module is configured to, when the second audio frame is output, compare the target time parameter with the target timestamp, and determine a timestamp comparison result;
and the first information processing module is configured to trigger an addressing and playing process based on the timestamp comparison result, so that the target time parameter and the target timestamp are kept synchronous when the second audio frame is output.
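The timestamp comparison that triggers the addressing-play flow might look like the following; the drift tolerance is an assumed parameter that the patent does not specify:

```python
def should_seek(target_time: int, target_timestamp: int, tolerance: int = 1) -> bool:
    """Compare the stored target time parameter with the output timestamp;
    trigger addressing play when they drift apart, keeping the two in sync."""
    return abs(target_time - target_timestamp) > tolerance
```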
In the above scheme,
and the first information processing module is used for outputting a null data frame matched with the target timestamp as the second audio frame when the template information of the audio service processing environment is analyzed and processed and the first audio track data is not obtained.
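The null-frame fallback above can be sketched as follows; the frame size and the use of zero samples for silence are assumptions:

```python
from typing import List

def output_frame(track_data: List[float], timestamp: int,
                 frame_size: int = 4) -> List[float]:
    """Output a null (silent) data frame when parsing produced no track data."""
    if not track_data:
        return [0.0] * frame_size        # null frame matched to the target timestamp
    start = timestamp * frame_size
    return track_data[start:start + frame_size]
```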
In the above scheme,
the first information processing module is configured to, when the dynamic modification instruction is to adjust a playing rate of an audio frame, adjust first audio track data in the audio information by using the audio frame data reader to obtain an audio frame playing rate matched with the dynamic modification instruction;
and the first information processing module is used for adjusting the first audio track data in the audio information through the audio frame data reader to obtain the volume of the audio frame matched with the dynamic modification instruction when the dynamic modification instruction is used for adjusting the volume of the audio frame.
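A crude sketch of the rate and volume adjustments above; a real reader would resample with interpolation rather than dropping samples, and all names here are hypothetical:

```python
from typing import List

def adjust(track: List[float], instruction: str, value: float) -> List[float]:
    """Adjust track data to match a rate or volume modification instruction."""
    if instruction == "volume":
        return [s * value for s in track]   # scale sample amplitude
    if instruction == "rate":
        step = max(1, int(value))           # crude speed-up: keep every Nth sample
        return track[::step]
    return track
```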
An embodiment of the present invention further provides an audio information presenting apparatus, where the apparatus includes:
the second information processing module is used for displaying a user interface and presenting a task control function item in the user interface, wherein the task control function item is used for dynamically modifying the first audio track data through a dynamic modification instruction triggered by a condition;
the second information processing module is used for responding to the triggering operation aiming at the task control function item and acquiring animation special effect information containing an audio service output instruction;
the second information processing module is used for acquiring a second audio frame corresponding to the audio service output instruction;
the second information processing module is configured to present the animation special effect information and the second audio frame in the user interface.
In the above scheme,
the second information processing module is used for displaying a user interface and presenting a task control function item in the user interface, wherein the task control function item is used for dynamically modifying the first audio track data through a dynamic modification instruction played by addressing;
the second information processing module is used for responding to the triggering operation aiming at the task control function item and acquiring the lyric special effect information containing the audio service output instruction;
the second information processing module is used for acquiring a second audio frame corresponding to the audio service output instruction;
the second information processing module is used for presenting the lyric special effect information and the second audio frame in the user interface.
In the above scheme,
the second information processing module is used for responding to the viewing operation aiming at the task control function item, presenting a content page comprising the template information of the audio service processing environment and presenting at least one interactive function item in the content page, wherein the interactive function item is used for realizing the interaction with the audio service processing environment;
the second information processing module is used for receiving the interactive operation aiming at the audio service processing environment triggered based on the interactive function item so as to execute a corresponding interactive instruction.
In the above scheme,
the second information processing module is used for presenting first interaction prompt information in the content page, and the first interaction prompt information is used for prompting that the interaction content corresponding to the interaction operation can be presented in the user interface;
and the second information processing module is used for responding to the operation of switching to the user interface and switching the content page to the user interface.
In the above scheme,
the second information processing module is used for presenting second interaction prompt information in the content page, and the second interaction prompt information is used for prompting that the interaction content corresponding to the interaction operation can be presented in a special effect information template library interface;
and the second information processing module is used for responding to an instruction of switching to the special effect information template library interface and switching the content page to the special effect information template library interface.
In the above scheme,
the second information processing module is used for presenting a sharing function item for sharing the special effect information in the user interface;
the second information processing module is used for responding to the triggering operation of the sharing function item aiming at the special effect information and sharing the special effect information to users in different audio service processing environments.
An embodiment of the present invention further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the audio information processing method of the preamble or realizing the audio information presentation method of the preamble when the executable instructions stored in the memory are run.
Embodiments of the present invention also provide a computer-readable storage medium storing executable instructions, which when executed by a processor implement a method for processing audio information according to a preamble or a method for presenting audio information according to a preamble.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention analyzes and processes the template information of the audio service processing environment to obtain first audio track data; storing the first audio track data into an audio information storage hash table, wherein the audio information storage hash table is used for storing audio information, and the first audio track data is stored in the audio information; responding to a dynamic modification instruction, and dynamically modifying the first audio track data to obtain second audio track data; responding to an audio service output instruction, acquiring audio information from the audio information storage hash table, and configuring a corresponding audio frame data reader for the audio information; extracting, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information; combining the first audio frames corresponding to different target timestamps to obtain and output a second audio frame so as to respond to the audio service output instruction through the second audio frame; therefore, real-time modification and flexible control of the audio information can be realized, the audio information processing process responding to the audio service output instruction is simpler and more convenient, and the convenience of audio information processing is improved.
Drawings
FIG. 1 is a schematic diagram of an environment for processing audio information according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 3 is a block diagram of an architecture for organizing logic and data in accordance with an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an alternative audio information processing method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the processing of audio track data by dynamic modification instructions according to an embodiment of the present invention;
fig. 6 is a schematic flow chart of an alternative audio information processing method according to an embodiment of the present invention;
fig. 7 is a schematic flow chart of an alternative audio information processing method according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating animated special effects information in accordance with an embodiment of the present invention;
FIG. 9 is a diagram illustrating animation effect information according to an embodiment of the invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Before the embodiments of the present invention are described in further detail, the terms and expressions used in the embodiments are explained; the following explanations apply to these terms and expressions throughout.
1) In response to: indicates the condition or state on which a performed operation depends. When that condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are executed.
2) Target video: various forms of video information available in the internet, such as video files, audio information, etc. presented in a client or smart device.
3) Client: the carrier in a terminal that implements a specific function. For example, a mobile client (APP) is the carrier of a specific function in a mobile terminal, such as online live streaming (video push streaming) or online video playback.
4) Audio information: including but not limited to background music in long videos (movie or television work videos), music in short videos (user-uploaded videos shorter than 1 minute), audio (for example, an MV or an album with a fixed picture), and sound-effect audio in animations.
5) ECS: Entity Component System, an architectural pattern for organizing logic and data. A Component carries the data required for the system to run, a System is a logic module that processes the data carried on Components, and an Entity is the representation of an object; one Entity uses multiple Components to carry its data.
6) Audio Process System: an audio processing system in a rendering engine based on an ECS implementation.
7) AudioInfoMap: the audio information storage hash table, a structure used to store multiple AudioInfo entries.
8) AudioReader: a reader for reading audio frame data in an AudioInfo.
9) AudioInfo: i.e., audio information, represents an audio data source structure.
Fig. 1 is a schematic view of a usage scenario of an audio information processing method according to an embodiment of the present invention. Referring to fig. 1, terminals (including a terminal 10-1 and a terminal 10-2) are provided with corresponding clients capable of executing different functions; the clients acquire different video information from corresponding servers 200 through different service processes via a network 300 for browsing. The terminals are connected to the server 200 through the network 300, which may be a wide area network, a local area network, or a combination of the two, with data transmission implemented over wireless links. The types of audio information that the terminals (including the terminal 10-1 and the terminal 10-2) acquire from the corresponding server 200 through the network 300 differ, including but not limited to long videos (e.g., movie or television work videos), short videos (user-uploaded videos shorter than 1 minute), audio (e.g., an MV or an album with a fixed picture), and audio information corresponding to sound effects in animations. For example, the terminals can obtain long videos (i.e., videos carrying video information or corresponding video links) from the corresponding server 200 through the network 300, and can also obtain short videos from a corresponding server 400 through the network 300 for browsing. Different types of videos may be stored in the server 200 and the server 400. This application no longer distinguishes between the playing environments of different types of audio information. In the foregoing audio service processing environment, audio information needs to be processed and output according to different service requirements and user usage requirements.
Taking short video as an example, the audio information processing method provided by the invention can be applied to short video playing, splicing or addressing playing processing can be performed on audio information of different data sources in the making and playing of the short video, so as to generate audio information in animation special effects meeting the requirements of users, for example, template information of an audio service processing environment can be analyzed, and first audio track data can be obtained; storing the first audio track data into an audio information storage hash table, wherein the audio information storage hash table is used for storing audio information, and the first audio track data is stored in the audio information; responding to a dynamic modification instruction, and dynamically modifying the first audio track data to obtain second audio track data; responding to an audio service output instruction, acquiring audio information from the audio information storage hash table, and configuring a corresponding audio frame data reader for the audio information; extracting, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information; and combining the first audio frames corresponding to different target timestamps to obtain and output a second audio frame so as to respond to the audio service output instruction through the second audio frame.
Finally, an audio frame matched with the corresponding audio service output instruction is presented on a User Interface (UI). The audio frame matched with the audio service output instruction obtained in this process can also be called by other application programs (for example, recommending that a contact in the short video client use the same animation special effect to generate a corresponding audio frame).
Because the demand for audio information processing keeps increasing, the audio information processing method provided by the embodiments of the present application can be implemented through cloud technology. The embodiments of the present invention may be implemented in combination with cloud technology or blockchain network technology. Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data; it can also be understood as a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied on the basis of the cloud computing business model. The background services of technical network systems, such as multimedia information websites, picture websites, and portal websites, require a large amount of computing and storage resources, so cloud technology needs the support of cloud computing.
It should be noted that cloud computing is a computing model that distributes computing tasks over a resource pool formed by a large number of computers, so that various application systems can obtain computing power, storage space, and information services on demand. The network that provides the resources is called the "cloud". To the user, the resources in the "cloud" appear infinitely expandable and can be obtained at any time, used on demand, expanded at any time, and paid for according to use. As a basic capability provider of cloud computing, a cloud computing resource pool platform, referred to as Infrastructure as a Service (IaaS), is established; multiple types of virtual resources are deployed in the resource pool for external clients to use selectively. The cloud computing resource pool mainly includes computing devices (which may be virtualized machines, including operating systems), storage devices, and network devices.
As will be described in detail below, the electronic device according to the embodiment of the present invention may be implemented in various forms, such as a dedicated terminal with an audio information processing function (for example, a gateway) or a server with an audio information processing function (for example, the server 200 in fig. 1). Fig. 2 is a schematic diagram of the composition structure of an electronic device according to an embodiment of the present invention; it is understood that fig. 2 only shows an exemplary structure of the server, and part or all of the structure shown in fig. 2 may be implemented as needed.
The electronic equipment provided by the embodiment of the invention comprises: at least one processor 201, memory 202, user interface 203, and at least one network interface 204. The various components in the electronic device 20 are coupled together by a bus system 205. It will be appreciated that the bus system 205 is used to enable communications among the components of the connection. The bus system 205 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 205 in FIG. 2.
The user interface 203 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 202 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The memory 202 in embodiments of the present invention is capable of storing data to support operation of the terminal (e.g., 10-1). Examples of such data include: any computer program, such as an operating system and application programs, for operating on a terminal (e.g., 10-1). The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application program may include various application programs.
In some embodiments, the audio information processing apparatus provided in the embodiments of the present invention may be implemented by a combination of hardware and software. By way of example, the audio information processing apparatus provided in the embodiments of the present invention may be a processor in the form of a hardware decoding processor, which is programmed to execute the audio information processing method provided in the embodiments of the present invention. For example, a processor in the form of a hardware decoding processor may employ one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
As an example of the audio information processing apparatus provided by the embodiment of the present invention implemented by combining software and hardware, the audio information processing apparatus provided by the embodiment of the present invention may be directly embodied as a combination of software modules executed by the processor 201, where the software modules may be located in a storage medium, the storage medium is located in the memory 202, and the processor 201 reads executable instructions included in the software modules in the memory 202, and completes the audio information processing method provided by the embodiment of the present invention in combination with necessary hardware (for example, including the processor 201 and other components connected to the bus 205).
By way of example, the processor 201 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like, wherein the general-purpose processor may be a microprocessor or any conventional processor or the like.
As an example of the audio information processing apparatus provided by the embodiment of the present invention being implemented by hardware, the apparatus provided by the embodiment of the present invention may be implemented directly using the processor 201 in the form of a hardware decoding processor, for example, executed by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components, to implement the audio information processing method provided by the embodiment of the present invention.
The memory 202 in embodiments of the present invention is used to store various types of data to support the operation of the electronic device 20. Examples of such data include: any executable instructions for operating on the electronic device 20; programs implementing the audio information processing methods of embodiments of the present invention may be included in these executable instructions.
In other embodiments, the audio information processing apparatus provided by the embodiment of the present invention may be implemented by software, and fig. 2 shows the audio information processing apparatus 2020 stored in the memory 202, which may be software in the form of programs, plug-ins, and the like, and includes a series of modules, and as an example of the programs stored in the memory 202, the audio information processing apparatus 2020 may include the following software modules: a first information transmission module 2081 and a first information processing module 2082. When the software modules in the audio information processing apparatus 2020 are read into the RAM by the processor 201 and executed, the functions of the software modules in the audio information processing apparatus 2020 are described as follows:
the first information transmission module 2081, configured to analyze template information of an audio service processing environment to obtain first audio track data;
the first information processing module 2082 is configured to store the first audio track data in an audio information storage hash table, where the audio information storage hash table is used to store audio information, and the audio information is used to store the first audio track data;
the first information processing module 2082, configured to respond to a dynamic modification instruction, and dynamically modify the first audio track data to obtain second audio track data;
the first information processing module 2082, configured to respond to an audio service output instruction, obtain audio information from the audio information storage hash table, and configure a corresponding audio frame data reader for the audio information;
the first information processing module 2082 is configured to extract, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information;
the first information processing module 2082 is configured to combine the first audio frames corresponding to different target timestamps to obtain and output a second audio frame, so as to implement a response to the audio service output instruction through the second audio frame.
In other embodiments, the audio information presentation apparatus provided in the embodiments of the present invention may also be implemented in software, and the audio information presentation apparatus 2021 in the memory 202 may be software in the form of programs, plug-ins, and the like, and includes a series of modules, as examples of the programs stored in the memory 202, the audio information presentation apparatus 2021 may include the following software modules: a second information transmission module 2083, and a second information processing module 2084. When the software modules in the audio information presentation apparatus 2021 are read into the RAM by the processor 201 and executed, the functions of the software modules in the audio information presentation apparatus 2021 will be described as follows:
the second information transmission module 2083 is configured to display a user interface, and present a task control function item in the user interface, where the task control function item is used to dynamically modify the first audio track data through a dynamic modification instruction triggered by a condition.
And the second information processing module 2084 is configured to, in response to a trigger operation for the task control function item, obtain animation special effect information including an audio service output instruction.
The second information processing module 2084 is configured to obtain a second audio frame corresponding to the audio service output instruction.
The second information processing module 2084 is configured to present the animation special effect information and the second audio frame in the user interface.
According to the electronic device shown in fig. 2, in one aspect of the present application, the present application also provides a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes different embodiments and combinations of embodiments provided in various alternative implementations of the audio information processing method or the audio information presenting method.
Before describing the audio information processing method provided by the embodiment of the present invention with reference to the electronic device 20 shown in fig. 2, the architecture for organizing logic and data (Entity Component System, ECS) used by the audio information processing method provided by the present application is first introduced. Referring to fig. 3, fig. 3 is a schematic structural diagram of the architecture for organizing logic and data in the embodiment of the present invention, wherein the architecture comprises: an Audio processing System and a Video processing System, wherein the Audio processing System interacts with entities (Entity) containing both the Time and Audio component types, and the Video processing System interacts with entities containing both the Time and Video component types. Because of the complexity of the services handled by the architecture, the remaining entities that do not contain the required components are not processed (e.g., Entity 6 and Entity 7 in fig. 3, which contain neither both Time and Audio nor both Time and Video, are processed by neither the Audio processing System nor the Video processing System). The architecture shown in fig. 3 also includes two further systems: a Script system (Script) and an Event Trigger system (Event Trigger); both can modify the content of the Audio component in real time, so that the Audio processing System processes the latest audio data in real time. Taking the usage environment of game animation as an example, when audio information is processed using the architecture of organizing logic and data shown in fig. 3, entities can carry different game effect items in a game program, and a component may represent the data in the game.
When the architecture for organizing logic and data shown in fig. 3 is used, the audio processing system can continuously read the Components (data-only modules used to store the data required by an Entity) contained in the entities of interest, and then process them. During use, changes in motion state or in the state of the audio data are realized by updating the Component data in real time, and the audio processing system continuously reads the Component data to keep the audio up to date in real time.
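As an illustrative sketch (not part of the claimed implementation; class and field names such as `AudioProcessingSystem` and `TimeComponent` are hypothetical), the filtering behavior described above — a system processing only those entities that contain both required component types, re-reading component data on every update — can be outlined as follows:

```python
from dataclasses import dataclass, field

@dataclass
class TimeComponent:
    timestamp_ms: int = 0

@dataclass
class AudioComponent:
    volume: float = 1.0
    playing: bool = True

@dataclass
class Entity:
    # Maps component type name -> component instance (data only, no logic).
    components: dict = field(default_factory=dict)

class AudioProcessingSystem:
    def entities_of_interest(self, entities):
        # Process only entities containing BOTH Time and Audio components;
        # entities lacking either one are skipped entirely.
        return [e for e in entities
                if "Time" in e.components and "Audio" in e.components]

    def update(self, entities):
        processed = []
        for e in self.entities_of_interest(entities):
            # Read the latest component data each frame; scripts or event
            # triggers may have modified it since the previous update.
            audio = e.components["Audio"]
            if audio.playing:
                processed.append(e)
        return processed
```

A script or event-trigger system would simply mutate an entity's `AudioComponent` fields; the next `update` call picks up the change without any extra synchronization.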
Referring to fig. 4, fig. 4 is an optional flowchart of the audio information processing method according to the embodiment of the present invention, and it can be understood that the steps shown in fig. 4 may be executed by various terminals operating the audio information processing apparatus, such as a dedicated terminal with an audio information processing function, for example, a terminal operating a short video client (or a triggered WeChat applet for producing animation special effect information). The following is a description of the steps shown in fig. 4.
Step 401: the audio information processing device analyzes the template information of the audio service processing environment to obtain first audio track data.
In some embodiments of the present invention, the template information of the audio service processing environment is analyzed to obtain the first audio track data, which may be implemented in the following manner:
analyzing the template information of the audio service processing environment to acquire the time sequence information of the template information; analyzing the audio parameters corresponding to the template information according to the time sequence information of the template information, and acquiring the audio type and the audio track information parameters corresponding to the template information; and extracting the template information based on the audio type and the audio track information parameter corresponding to the template information to obtain first audio track data corresponding to the template information. Since the audio type is related to the audio service processing environment and may include a single audio, an audio matched with the video information, and an audio matched with the animation resource, it is necessary to perform classification processing according to the audio type and acquire the first audio track data corresponding to the template information based on the corresponding audio track information parameter.
For example, when the audio information is used as background music in a long video, template information in the video data may be obtained first; then, the corresponding playing duration parameter and audio track information parameter can be obtained by analyzing the audio header decoding data AACDecoderSpecificInfo and the audio data configuration information AudioSpecificConfig in the template information. The audio data configuration information AudioSpecificConfig is used to generate the ADTS header (which contains the sampling rate, the number of channels, and the frame length of the audio data). The other audio packets in the video data are then acquired based on the audio track information and parsed into raw audio data; finally, the AAC ES stream is encapsulated into ADTS format by prepending a 7-byte ADTS header in front of the AAC ES stream, so as to extract the audio track data.
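The 7-byte ADTS header mentioned above has a well-known bit layout (syncword, profile, sampling frequency index, channel configuration, and a 13-bit frame length that includes the header itself). The following sketch, with assumed default parameter values, illustrates how such a header could be constructed and prepended to an AAC ES frame:

```python
def adts_header(frame_len: int, profile: int = 1,
                sample_rate_index: int = 4, channels: int = 2) -> bytes:
    """Build a 7-byte ADTS header for one AAC frame.

    frame_len: length of the raw AAC frame in bytes (the 13-bit frame
    length field must include the 7 header bytes).
    profile: AAC object type minus 1 (1 = AAC-LC);
    sample_rate_index 4 corresponds to 44100 Hz.
    """
    full_len = frame_len + 7
    hdr = bytearray(7)
    hdr[0] = 0xFF                                  # syncword high byte
    hdr[1] = 0xF1                                  # syncword low, MPEG-4, no CRC
    hdr[2] = (profile << 6) | (sample_rate_index << 2) | (channels >> 2)
    hdr[3] = ((channels & 0x3) << 6) | (full_len >> 11)
    hdr[4] = (full_len >> 3) & 0xFF
    hdr[5] = ((full_len & 0x7) << 5) | 0x1F        # buffer fullness high bits
    hdr[6] = 0xFC                                  # buffer fullness low, 1 frame
    return bytes(hdr)

def wrap_aac_frame(raw_frame: bytes) -> bytes:
    # Prepend the header so the frame becomes a self-describing ADTS unit.
    return adts_header(len(raw_frame)) + raw_frame
```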
In some embodiments of the present invention, due to the complex composition of the audio track data, when the animation special effect information (for example, special effect plug-in short video) corresponds to audio, in order to implement independent control on different audio information in the audio track data, different types of audio information may be extracted, for example: when the audio type is single audio, extracting audio data from an audio resource component of the template information to construct first audio information; when the audio type is the audio matched with the video information, extracting audio data from the multimedia information component of the template information to construct second audio information; when the audio type is the audio matched with the animation resource, extracting audio data from the animation resource component of the template information to construct third audio information; and combining the first audio information, the second audio information and the third audio information to obtain first audio track data corresponding to the template information. By combining different types of audio information, the audio performance of the produced animation special effect information is richer, meanwhile, the control of different types of audio information is more convenient, the complexity of audio information processing is reduced, the method is suitable for more short video users, and the use range of audio information processing is effectively expanded.
Step 402: and the audio information processing device saves the first audio track data into an audio information storage hash table.
The audio information storage hash table is used for storing audio information, and the first audio track data is stored in the audio information.
Step 403: and the audio information processing device responds to the dynamic modification instruction and dynamically modifies the first audio track data to obtain second audio track data.
In some embodiments of the present invention, dynamically modifying the first audio track data in response to a dynamic modification instruction to obtain second audio track data includes:
receiving the dynamic modification instruction, wherein the dynamic modification instruction comprises at least one of:
adjusting the playing start position, pausing playing, resuming playing, addressed (seek) playing, conditional triggers and playing scripts of the audio track data; dynamically modifying the first audio track data in the audio information storage hash table according to the type of the dynamic modification instruction to obtain second audio track data; and, in response to the dynamic modification instruction, modifying the audio information storage hash table to obtain an audio information storage hash table corresponding to the second audio track data. Referring to fig. 5, fig. 5 is a schematic diagram illustrating processing of audio track data by a dynamic modification instruction according to an embodiment of the present invention. Specifically, the basic control of audio by the dynamic modification instruction includes two main categories. The first category is operations on an audio track, including: playing, stopping, pausing, resuming, addressing (i.e. seek, for example changing the starting play position so that a 60-second audio is played from the 10th second), clipping (for example, only 6 seconds of a 60-second audio are played), delaying for a specific time (for example, delaying for 5 seconds), adding an audio to the existing playback, or modifying the original file path of an audio. The second category is adjusting the playing rate and volume of the audio. The dynamic modification instruction further supports conditional triggers or script programs to control the audio track data; for example, when a plurality of audios are played simultaneously under the control of a script program, the playing state of each audio can be independently modified without mutual influence. When the dynamic modification instruction is a conditional trigger, an audio playback can be dynamically added after a specific condition occurs.
For example: after the face is successfully recognized in the short video, certain sound effect information can be played, or background music needs to be paused and switched into another background music.
When the dynamic modification instruction is used for adjusting the playing rate of an audio frame, adjusting first audio track data in the audio information through the audio frame data reader to obtain the audio frame playing rate matched with the dynamic modification instruction; and when the dynamic modification instruction is used for adjusting the volume of an audio frame, adjusting the first audio track data in the audio information through the audio frame data reader to obtain the volume of the audio frame matched with the dynamic modification instruction.
In some embodiments of the present invention, the script program may dynamically modify the state of each audio in the process of template application, including operations such as removing an audio, pausing/resuming playing/increasing or decreasing the start playing position, and may dynamically control the playing of the audio information, so that the playing of the audio information is more flexible.
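The per-track control described above can be sketched as a hash table keyed by an audio identifier, where each dynamic modification instruction updates only the state of its own track. All names here, such as `apply_modification` and `AudioInfo`, are illustrative assumptions rather than the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class AudioInfo:
    start_ms: int = 0       # starting play position after a seek
    paused: bool = False
    rate: float = 1.0       # playing rate multiplier
    volume: float = 1.0

# The hash table maps an audio identifier to its stored audio information.
audio_table = {}

def apply_modification(audio_id, instruction, value=None):
    """Dynamically modify one track's state; other tracks are unaffected."""
    info = audio_table.setdefault(audio_id, AudioInfo())
    if instruction == "seek":
        info.start_ms = int(value)
    elif instruction == "pause":
        info.paused = True
    elif instruction == "resume":
        info.paused = False
    elif instruction == "rate":
        info.rate = float(value)
    elif instruction == "volume":
        info.volume = float(value)
    return info
```

A conditional trigger (e.g. a face being recognized) would simply call `apply_modification` when its condition fires, which is consistent with the script/event-trigger systems modifying component data in real time.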
Step 404: the audio information processing device responds to an audio service output instruction, obtains audio information from the audio information storage hash table, and configures a corresponding audio frame data reader for the audio information.
In some embodiments of the present invention, in the process of configuring a corresponding audio frame data reader for the audio information, when the audio information is obtained from the audio information storage hash table for the first time in response to an audio service output instruction, a corresponding first audio frame data reader is configured for the audio information; the continuous state of the audio information is monitored, and when the audio information persists, the first audio frame data reader is kept alive and the data information in the audio information is updated; when the audio information is removed, the first audio frame data reader is deleted, and a second audio frame data reader is configured according to the change in the audio information. In this process, since the latest audio information is synchronized for every audio frame, frequently creating audio frame data readers would waste CPU processing resources. By monitoring the audio data, a corresponding audio frame data reader can be created when the audio information is acquired from the audio information storage hash table for the first time in response to an audio service output instruction; if the audio data persists during subsequent processing, the corresponding audio frame data reader keeps its state and its resources are not released, and only the member attribute information stored in the hash table needs to be updated. If an audio information item is removed, the audio frame data reader list held by the AudioOutput also deletes the obsolete audio frame data reader; and for newly added audio data, a new audio frame data reader is created and associated.
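A minimal sketch of the reader lifecycle described above — create a reader the first time an audio item appears, reuse it while the item persists, and delete it when the item is removed — might look as follows (class names such as `AudioFrameReader` are hypothetical):

```python
class AudioFrameReader:
    """Stands in for the audio frame data reader; holds per-track state."""
    def __init__(self, audio_id):
        self.audio_id = audio_id

class AudioOutput:
    """Keeps one reader per live audio item instead of re-creating readers
    on every frame, which would waste CPU processing resources."""
    def __init__(self):
        self.readers = {}

    def sync(self, live_audio_ids):
        for aid in live_audio_ids:
            if aid not in self.readers:
                # First time this audio item is seen: create its reader.
                self.readers[aid] = AudioFrameReader(aid)
        for aid in list(self.readers):
            if aid not in live_audio_ids:
                # Audio item removed: drop the obsolete reader.
                del self.readers[aid]
```

Calling `sync` once per output frame keeps the reader list consistent with the hash table's contents while reusing existing reader objects.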
Step 405: the audio information processing apparatus extracts, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target time stamp stored in the audio information.
Step 406: and the audio information processing device performs combination processing on the first audio frames corresponding to different target timestamps to obtain and output a second audio frame so as to respond to the audio service output instruction through the second audio frame.
With continuing reference to fig. 6, fig. 6 is an alternative flowchart of the audio information processing method according to the embodiment of the present invention, and it can be understood that the steps shown in fig. 6 can be executed by various terminals operating the audio information processing apparatus, such as a dedicated terminal with an audio information processing function, for example, a terminal operating a short video client (or a triggered WeChat applet for producing animation special effect information). The following is a description of the steps shown in fig. 6.
Step 601: and when the dynamic modification instruction is addressing playing or the playing starting position is adjusted to the starting position, determining a target time parameter matched with the dynamic modification instruction, and storing the target time parameter in an audio information storage hash table.
Step 602: and when the second audio frame is output, comparing the target time parameter with the target timestamp, and determining a timestamp comparison result.
Step 603: and triggering an addressing and playing process based on the timestamp comparison result so as to keep the target time parameter and the target timestamp synchronous when the second audio frame is output.
In some embodiments of the present invention, after the audio information starts to be played, except in the cases of addressed (seek) playing and of looped playback restarting when the audio plays to the end, the current audio time does not need to be synchronized: the audio frame data reader automatically advances the audio track of the corresponding audio information to the playing time of the next audio frame. When addressed playing is performed, or playing starts from an adjusted start position, a target time targetTime can be set on the audio information and written into the corresponding hash table; according to targetTime, the playing timestamp of the current audio frame data reader is compared, and whether addressing (seek) processing is required is calculated.
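As a hedged sketch of the comparison described above, addressing is triggered only when the stored targetTime and the reader's current playback timestamp diverge by more than one frame; the one-frame tolerance and the default frame duration (roughly one 1024-sample AAC frame at 44.1 kHz) are assumptions for illustration:

```python
def needs_seek(target_time_ms, reader_time_ms, frame_dur_ms=23):
    """Compare the stored targetTime against the reader's current playback
    timestamp; seek only when they differ by more than one frame,
    otherwise let the reader advance to the next frame as usual."""
    return abs(target_time_ms - reader_time_ms) > frame_dur_ms
```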
Step 604: and when the template information of the audio service processing environment is analyzed and processed and the first audio track data is not obtained, outputting a null data frame matched with the target timestamp as the second audio frame.
When a template does not contain audio, the AudioOutput returns a null data frame (all-zero data) in response to an audio service output instruction.
With reference to fig. 7, fig. 7 is an optional flowchart of the audio information processing method according to an embodiment of the present invention, and it may be understood that the steps shown in fig. 7 may be executed by various terminals that operate an audio information processing apparatus, for example, a terminal that operates a short video client, and specifically include the following steps:
step 701: and analyzing the template to construct an ECS system.
Fig. 8 is a schematic diagram of animation special effect information in an embodiment of the present invention, and the method shown in fig. 7 is used to process the animation special effect information in fig. 8, specifically, a content page including template information of the audio service processing environment may be presented in response to a viewing operation for the task control function item, and at least one interactive function item is presented in the content page, where the interactive function item is used to implement interaction with the audio service processing environment; receiving an interactive operation, triggered based on the interactive function item, for the audio service processing environment, to execute a corresponding interactive instruction, taking animation special effect information shown in fig. 8 as an example, by triggering a task control function item 801 in a user interface 800, the first audio track data that has been stored may be dynamically modified, for example, a play start position of the first audio track data may be adjusted through a dynamic modification instruction triggered by a condition, and further, when the animation special effect information is presented in the user interface, the animation special effect information and the second audio frame may be presented in the user interface in response to an audio service output instruction.
Step 702: and (5) updating the process management by the condition triggering script program.
Step 703: the Component (service related) containing the audio is acquired.
Step 704: and storing the first audio track data into an audio information storage hash table.
Step 705: the first audio track data is dynamically modified in response to the dynamic modification instruction.
In some embodiments of the present invention, the first audio track data may be complete audio track data of a song, the task control function item 801 in the user interface is triggered, the stored first audio track data is dynamically modified through the dynamic modification instruction played by addressing, different audio frames in the song are intercepted and combined to form a second audio frame, and the lyric special effect information may be presented in the user interface 800 and the second audio frame may be output in response to the audio service output instruction, so that the audio information processing process in response to the audio service output instruction is simpler and more convenient, the convenience of audio information processing is improved, and the user obtains more convenient use experience.
Step 706: configuring the corresponding audio frame data reader for the audio information.
Step 707: and monitoring the continuous state of the audio information.
Step 708: and when the dynamic modification instruction is addressing playing or the playing starting position is adjusted to the starting position, determining a target time parameter matched with the dynamic modification instruction, and storing the target time parameter in an audio information storage hash table.
Step 709: and combining the first audio frames corresponding to different target timestamps to obtain a second audio frame.
Step 710: and outputting the second audio frame.
Referring to fig. 9, fig. 9 is a schematic diagram of animation special effect information according to an embodiment of the present invention, in a content page 900, at least one interactive function item 901 may be presented in the content page 900 in response to a viewing operation for the task control function item, where the interactive function item 901 is used to implement interaction with the audio service processing environment; and receiving interactive operation aiming at the audio service processing environment triggered based on the interactive function item so as to execute a corresponding interactive instruction. Further, first interaction prompt information can be presented in the content page, and the first interaction prompt information is used for prompting that the interaction content corresponding to the interaction operation can be presented in the user interface; and responding to the operation of switching to the user interface, and switching a content page to the user interface. When a user triggers an applet for making special effect information of animation, and first audio track data presented in a view interface do not meet use requirements, the user can reconfigure the first audio track data meeting the requirements through a content page 900, and the user confirms that the information of the configured first audio track data can be presented in the content page through first interaction prompt information so as to be used by the applet for making the special effect information, so that the selection types of the user are enriched.
In the content page 900 shown in fig. 9, second interaction prompt information may also be presented in the content page, where the second interaction prompt information is used to prompt that the interaction content corresponding to the interaction operation can be presented in the special effect information template library interface; and responding to an instruction of switching to the special effect information template library interface, and switching the content page to the special effect information template library interface. Specifically, as shown in fig. 9, since the user's needs are various, the user switches the content page to the interface of the special effect information template library through the second interaction prompt information, and can select special effect information meeting the user's needs through the special effect information template library, so that the user obtains richer use experience.
In some embodiments of the present invention, a sharing function item for sharing the special effect information may be further presented in the user interface; in response to a trigger operation on the sharing function item for the special effect information, the special effect information is shared with users in different audio service processing environments. Therefore, when the audio information processing method provided by the present application is used in a short video client, users can share the special effect information with different users through the sharing function item, and a second audio frame meeting service requirements is output in response to an audio service output instruction.
The beneficial technical effects are as follows:
the embodiment of the invention obtains first audio track data by analyzing and processing the template information of the audio service processing environment; storing the first audio track data into an audio information storage hash table, wherein the audio information storage hash table is used for storing audio information, and the first audio track data is stored in the audio information; responding to a dynamic modification instruction, and dynamically modifying the first audio track data to obtain second audio track data; responding to an audio service output instruction, acquiring audio information from the audio information storage hash table, and configuring a corresponding audio frame data reader for the audio information; extracting, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information; combining the first audio frames corresponding to different target timestamps to obtain and output a second audio frame so as to respond to the audio service output instruction through the second audio frame; therefore, real-time modification and flexible control of the audio information can be realized, the audio information processing process responding to the audio service output instruction is simpler and more convenient, and the convenience of audio information processing is improved.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (15)

1. A method for processing audio information, the method comprising:
analyzing the template information of the audio service processing environment to obtain first audio track data;
storing the first audio track data into an audio information storage hash table, wherein the audio information storage hash table is used for storing audio information, and the audio information is used for storing the first audio track data;
responding to a dynamic modification instruction, and dynamically modifying the first audio track data to obtain second audio track data;
responding to an audio service output instruction, acquiring audio information from the audio information storage hash table, and configuring a corresponding audio frame data reader for the audio information;
extracting, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information;
and combining the first audio frames corresponding to different target timestamps to obtain and output a second audio frame so as to respond to the audio service output instruction through the second audio frame.
2. The method of claim 1, wherein parsing the template information of the audio service processing environment to obtain the first audio track data comprises:
analyzing the template information of the audio service processing environment to acquire the time sequence information of the template information;
analyzing the audio parameters corresponding to the template information according to the time sequence information of the template information, and acquiring the audio type and the audio track information parameters corresponding to the template information;
and extracting the template information based on the audio type and the audio track information parameter corresponding to the template information to obtain first audio track data corresponding to the template information.
3. The method according to claim 2, wherein the extracting the template information based on the audio type and the track information parameter corresponding to the template information to obtain the first audio track data corresponding to the template information comprises:
when the audio type is single audio, extracting audio data from an audio resource component of the template information to construct first audio information;
when the audio type is the audio matched with the video information, extracting audio data from the multimedia information component of the template information to construct second audio information;
when the audio type is the audio matched with the animation resource, extracting audio data from the animation resource component of the template information to construct third audio information;
and combining the first audio information, the second audio information and the third audio information to obtain first audio track data corresponding to the template information.
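The type-dependent extraction of claim 3 amounts to a dispatch from audio type to the template component holding that audio, followed by combination. A hedged Python sketch, in which the component names and the "combine by concatenation" step are invented for illustration:

```python
def extract_track(template):
    """Build first audio track data by extracting audio data from the component
    matching each declared audio type, then combining the pieces."""
    components = {
        "single": "audio_resource",          # single audio → first audio information
        "video": "multimedia_component",     # audio matched with video → second
        "animation": "animation_component",  # audio matched with animation → third
    }
    pieces = []
    for audio_type in template["types"]:
        source = components[audio_type]
        pieces.append(template[source])      # extract audio data from that component
    return [s for piece in pieces for s in piece]  # combine into one track

template = {"types": ["single", "animation"],
            "audio_resource": [1, 2], "multimedia_component": [3],
            "animation_component": [4, 5]}
print(extract_track(template))  # [1, 2, 4, 5]
```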
4. The method of claim 1, wherein said dynamically modifying said first audio track data in response to a dynamic modification instruction to obtain second audio track data comprises:
receiving the dynamic modification instruction, wherein the dynamic modification instruction comprises at least one of:
adjusting a play start position of the audio track data, pausing play, continuing play, addressing play, a trigger condition, and a play script;
according to the type of the dynamic modification instruction, dynamically modifying the first audio track data in the audio information storage hash table to obtain second audio track data;
and responding to a dynamic modification instruction, and modifying the audio information storage hash table to obtain an audio information storage hash table corresponding to the second audio track data.
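Claim 4's "modify according to the type of the instruction" is essentially a dispatch that updates the hash-table entry. A minimal Python sketch under assumed names (`apply_modification`, the instruction-type strings, and the dict-based entry are all hypothetical):

```python
def apply_modification(table, key, instruction):
    """Dispatch a dynamic modification instruction by its type and update the
    hash-table entry, so the table reflects the second audio track data."""
    entry = dict(table[key])
    kind = instruction["type"]
    if kind == "adjust_start":
        entry["position"] = instruction["position"]   # adjust play start position
    elif kind == "pause":
        entry["playing"] = False
    elif kind == "resume":
        entry["playing"] = True
    elif kind == "seek":
        entry["position"] = instruction["target"]     # addressing play
    else:
        raise ValueError(f"unknown instruction: {kind}")
    table[key] = entry   # hash table now holds the modified entry
    return entry

table = {"bgm": {"position": 0, "playing": True}}
apply_modification(table, "bgm", {"type": "seek", "target": 42})
print(table["bgm"])  # {'position': 42, 'playing': True}
```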
5. The method of claim 1, wherein the acquiring audio information from the audio information storage hash table in response to an audio service output instruction, and configuring a corresponding audio frame data reader for the audio information comprises:
when responding to an audio service output instruction for the first time and acquiring audio information from the audio information storage hash table, configuring a corresponding first audio frame data reader for the audio information;
monitoring the persistence state of the audio information; when the audio information persists, keeping the first audio frame data reader and updating data information in the audio information;
and when the audio information is removed and new audio information is added, deleting the first audio frame data reader, and configuring a second audio frame data reader according to the change of the audio information.
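The reader lifecycle of claim 5, configure on first use, keep while the entry persists, delete and reconfigure when the entry is replaced, can be sketched as a small manager class. All names here (`ReaderManager`, `reader_for`, the generation counter) are illustrative, not from the patent:

```python
class ReaderManager:
    """Configure one audio frame data reader per audio-information entry."""
    def __init__(self):
        self._readers = {}

    def reader_for(self, key, info):
        reader = self._readers.get(key)
        if reader is None:
            # first audio service output for this entry: configure a first reader
            reader = {"info": info, "generation": 1}
            self._readers[key] = reader
        elif reader["info"] is not info:
            # the audio information was removed and new information added:
            # delete the first reader and configure a second one
            reader = {"info": info, "generation": reader["generation"] + 1}
            self._readers[key] = reader
        # otherwise the entry persists: keep the existing reader
        return reader

mgr = ReaderManager()
info_a = {"track": [1, 2]}
first = mgr.reader_for("fx", info_a)
same = mgr.reader_for("fx", info_a)            # entry persists → reader kept
second = mgr.reader_for("fx", {"track": [3]})  # entry replaced → new reader
print(first["generation"], same["generation"], second["generation"])  # 1 1 2
```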
6. The method of claim 1, further comprising:
when the dynamic modification instruction is addressing play or adjusting the play start position, determining a target time parameter matched with the dynamic modification instruction, and storing the target time parameter in the audio information storage hash table;
when the second audio frame is output, comparing the target time parameter with the target timestamp, and determining a timestamp comparison result;
and triggering an addressing playing process based on the timestamp comparison result so as to keep the target time parameter and the target timestamp synchronous when the second audio frame is output.
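The synchronization check of claim 6 reduces to comparing the stored target time parameter against the timestamp of the frame being output, and triggering the addressing-play (seek) process on a mismatch. A hedged sketch with an invented function name and return convention:

```python
def sync_seek(target_time, output_timestamp):
    """Compare the stored target time parameter with the output frame's
    timestamp; a mismatch triggers the addressing-play process so the
    output stays synchronized with the target time."""
    if output_timestamp != target_time:
        return ("seek", target_time)       # jump playback to the target time
    return ("in_sync", output_timestamp)   # nothing to do

print(sync_seek(5, 3))  # ('seek', 5)
print(sync_seek(5, 5))  # ('in_sync', 5)
```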
7. The method of claim 1, further comprising:
and when the template information of the audio service processing environment is analyzed and processed and the first audio track data is not obtained, outputting a null data frame matched with the target timestamp as the second audio frame.
8. The method of claim 1, further comprising:
when the dynamic modification instruction is used for adjusting the playing rate of an audio frame, adjusting first audio track data in the audio information through the audio frame data reader to obtain the audio frame playing rate matched with the dynamic modification instruction;
and when the dynamic modification instruction is used for adjusting the volume of an audio frame, adjusting the first audio track data in the audio information through the audio frame data reader to obtain the volume of the audio frame matched with the dynamic modification instruction.
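The two adjustments of claim 8 can be illustrated on a toy sample list: rate change as naive resampling (dropping or repeating samples) and volume change as scaling. This is a simplification under assumed names; real audio would use proper resampling filters rather than index skipping:

```python
def adjust(track, instruction):
    """Adjust playback rate (by naive resampling) or volume (by scaling)
    of the first track data to match the dynamic modification instruction."""
    if instruction["type"] == "rate":
        step = instruction["rate"]                   # e.g. 2.0 → twice as fast
        return [track[int(i * step)] for i in range(int(len(track) / step))]
    if instruction["type"] == "volume":
        return [s * instruction["gain"] for s in track]
    raise ValueError("unsupported instruction")

print(adjust([1, 2, 3, 4], {"type": "rate", "rate": 2.0}))  # [1, 3]
print(adjust([1, 2], {"type": "volume", "gain": 0.5}))      # [0.5, 1.0]
```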
9. A method of audio information presentation, the method comprising:
displaying a user interface, and presenting a task control function item in the user interface, wherein the task control function item is used for dynamically modifying first audio track data through a dynamic modification instruction triggered by a condition;
responding to a trigger operation on the task control function item, and acquiring animation special effect information containing an audio service output instruction;
acquiring a second audio frame corresponding to the audio service output instruction;
presenting the animated special effects information and the second audio frame in the user interface.
10. The method of claim 9, further comprising:
the task control function item is also used for dynamically modifying the first audio track data through a dynamic modification instruction played by addressing;
responding to a trigger operation on the task control function item, and acquiring lyric special effect information containing an audio service output instruction;
acquiring a second audio frame corresponding to the audio service output instruction;
presenting the lyrics special effect information and the second audio frame in the user interface.
11. The method of claim 9, further comprising:
in response to a viewing operation on the task control function item, presenting a content page comprising the template information of the audio service processing environment, and presenting at least one interactive function item in the content page, wherein the interactive function item is used for interacting with the audio service processing environment;
and receiving an interactive operation on the audio service processing environment triggered based on the interactive function item, so as to execute a corresponding interactive instruction.
12. An audio information processing apparatus, characterized in that the apparatus comprises:
the first information transmission module is used for analyzing and processing the template information of the audio service processing environment to obtain first audio track data;
a first information processing module, configured to store the first audio track data in an audio information storage hash table, where the audio information storage hash table is used to store audio information, and the audio information is used to store the first audio track data;
the first information processing module is used for responding to a dynamic modification instruction and dynamically modifying the first audio track data to obtain second audio track data;
the first information processing module is used for responding to an audio service output instruction, acquiring audio information from the audio information storage hash table, and configuring a corresponding audio frame data reader for the audio information;
the first information processing module is configured to extract, by the audio frame data reader, a first audio frame in the second audio track data corresponding to a target timestamp stored in the audio information;
the first information processing module is configured to combine first audio frames corresponding to different target timestamps to obtain and output a second audio frame, so as to respond to the audio service output instruction through the second audio frame.
13. An audio information presentation apparatus, characterized in that the apparatus comprises:
the second information transmission module is used for displaying a user interface and presenting a task control function item in the user interface, wherein the task control function item is used for dynamically modifying the first audio track data through a dynamic modification instruction triggered by a condition;
the second information processing module is used for responding to a trigger operation on the task control function item, and acquiring animation special effect information containing an audio service output instruction;
the second information processing module is used for acquiring a second audio frame corresponding to the audio service output instruction;
the second information processing module is configured to present the animation special effect information and the second audio frame in the user interface.
14. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the audio information processing method of any one of claims 1 to 8 or the audio information presentation method of any one of claims 9 to 11 when executing the executable instructions stored by the memory.
15. A computer-readable storage medium storing executable instructions, characterized in that the executable instructions, when executed by a processor, implement the audio information processing method of any one of claims 1 to 8, or implement the audio information presentation method of any one of claims 9 to 11.
CN202110513496.2A 2021-05-11 2021-05-11 Audio information processing method, audio information presenting method and device Pending CN115329122A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110513496.2A CN115329122A (en) 2021-05-11 2021-05-11 Audio information processing method, audio information presenting method and device


Publications (1)

Publication Number Publication Date
CN115329122A true CN115329122A (en) 2022-11-11

Family

ID=83912162


Country Status (1)

Country Link
CN (1) CN115329122A (en)


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40075363
Country of ref document: HK

SE01 Entry into force of request for substantive examination