CN113031906A - Audio playing method, device, equipment and storage medium in live broadcast - Google Patents

Audio playing method, device, equipment and storage medium in live broadcast

Info

Publication number
CN113031906A
Authority
CN
China
Prior art keywords
target
audio
playing
live broadcast
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110441630.2A
Other languages
Chinese (zh)
Inventor
朱明媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110441630.2A
Publication of CN113031906A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/165 - Management of the audio stream, e.g. setting of volume, audio stream path
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method, a device and equipment for playing audio in live broadcast and a computer readable storage medium; the method comprises the following steps: playing live broadcast content of a target live broadcast room through a live broadcast interface; in the process of playing the live broadcast content, acquiring the quantity information of objects for executing target interaction operation in the target live broadcast room; and playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects. By the method and the device, image display resources can be saved, and perception of quantity information of the objects for executing the target interactive operation is enhanced.

Description

Audio playing method, device, equipment and storage medium in live broadcast
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for playing an audio in a live broadcast.
Background
With the development of the internet and the popularization of networks, live broadcasting has become a very popular way of entertainment and learning. In the related art, during live broadcasting, the number of online viewers is usually displayed as a number in the live broadcast interface, or bullet screen information sent by viewers is displayed, with the number of viewers who send bullet screens represented by the amount of bullet screen information.
Disclosure of Invention
The embodiment of the application provides an audio playing method and device in live broadcasting and a computer readable storage medium, which can save image display resources and enhance perception of quantity information of objects for executing target interactive operation.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an audio playing method in live broadcasting, which comprises the following steps:
playing live broadcast content of a target live broadcast room through a live broadcast interface;
in the process of playing the live broadcast content, acquiring the quantity information of objects for executing target interaction operation in the target live broadcast room;
and playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects.
The embodiment of the application provides an audio playing apparatus in live broadcast, which includes:
the first playing module is used for playing live broadcast content of a target live broadcast room through a live broadcast interface;
the acquisition module is used for acquiring the quantity information of objects for executing target interaction operation in the target live broadcast room in the process of playing the live broadcast content;
and the second playing module is used for playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects.
In the above scheme, the second playing module is further configured to obtain a plurality of sound effects corresponding to different scenes;
determining a target scene matched with the quantity information according to the quantity information of the objects;
processing the audio data in the live broadcast content by adopting the sound effect corresponding to the target scene to obtain the audio data carrying the scene information of the target scene;
and playing the audio data carrying the scene information of the target scene.
In the above scheme, the second playing module is further configured to obtain an audio material corresponding to the target interaction operation;
processing the audio material corresponding to the target interactive operation by adopting the sound effect matched with the quantity information of the objects;
and superposing the processed audio material with the audio data in the live broadcast content, and playing the audio data superposed with the audio material.
In the above scheme, the target interaction operation includes a viewing operation and a plurality of audio interaction operations, and each audio interaction operation corresponds to an audio material;
the second playing module is further configured to obtain audio materials corresponding to the audio interaction operations;
respectively adopting sound effects matched with the quantity information of the objects for executing the audio interaction operation to process corresponding audio materials to obtain a plurality of target audio materials;
superposing a plurality of target audio materials and audio data in the live broadcast content, and
and playing the audio data superposed with the target audio materials by adopting a sound effect matched with the quantity information of the objects for executing the watching operation.
In the above scheme, the obtaining module is further configured to present a plurality of audio selection items in a live interface, where each audio selection item corresponds to at least one audio interaction operation;
and responding to the selection operation of a target audio selection item in the plurality of audio selection items, and taking at least one audio interaction operation corresponding to the target audio selection item as a target interaction operation.
In the above scheme, the second playing module is further configured to present, in a live interface, an interactive function item for triggering an audio interaction operation when the target interaction operation is an audio interaction operation for playing a target audio material;
and responding to the triggering operation aiming at the interactive function item, and playing the target audio material.
In the above scheme, the second playing module is further configured to obtain an operation type corresponding to the trigger operation, and determine a playing duration matched with the operation type;
and playing the target audio material according to the playing duration.
In the above scheme, the obtaining module is further configured to obtain an object information list having a social relationship with a current user object;
determining the quantity information of objects having social relations with the current object in the objects for executing the target interaction operation in the target live broadcast room based on the object information list;
and the second playing module is also used for playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects having social relations with the current object.
In the above scheme, the second playing module is further configured to receive playing indication information for indicating that the target audio material is played with a target volume when the target interaction operation is an audio interaction operation for playing the target audio material;
the target volume corresponds to an object distance, wherein the object distance is a distance between a geographic position of an object for executing the audio interaction operation and a geographic position of a current user object;
and playing the target audio material by adopting the target volume.
In the above scheme, the second playing module is further configured to, when an object newly added to the target live broadcast room exists, obtain identity information of the object;
and playing the incoming audio data by adopting a sound effect matched with the identity information.
In the above scheme, the obtaining module is further configured to obtain a first quantity order of objects performing the target interaction operation in the target live broadcast room when the quantity information is the quantity order of the objects performing the target interaction operation;
and the second playing module is further configured to, when the quantity magnitude is switched from the first quantity magnitude to a second quantity magnitude, adjust the sound effect of the audio data from the sound effect matched with the first quantity magnitude to the sound effect matched with the second quantity magnitude.
In the above scheme, the second playing module is further configured to acquire a target scene corresponding to the sound effect;
and playing the animation special effect matched with the target scene in the process of playing the audio data.
In the above scheme, the second playing module is further configured to determine, according to the live content, a plurality of candidate sound effects associated with the live content;
and determining the sound effect matched with the quantity information of the object based on the corresponding relation between the quantity information and the candidate sound effect.
In the above scheme, the second playing module is further configured to, when a target live event is received, obtain an audio material associated with the target live event;
and playing audio materials related to the target live event by adopting a sound effect matched with the quantity information of the objects.
An embodiment of the present application provides a computer device, including:
a memory for storing executable instructions;
and the processor is used for realizing the audio playing method in the live broadcast provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the audio playing method in live broadcasting provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
by applying the embodiment, the quantity information of the objects for executing the target interactive operation in the target live broadcast room is acquired in the process of playing the live broadcast content; playing audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects; therefore, through different sound effects, the user can perceive the quantity information of the objects for executing the target interactive operation in the target live broadcast room, and compared with the quantity information of the objects for executing the target interactive operation in the target live broadcast room embodied in an image display mode, the image display resources can be saved, and the perception of the quantity information of the objects for executing the target interactive operation is enhanced.
Drawings
FIG. 1 is a diagram of a live interface provided by the related art;
fig. 2 is a schematic diagram of an alternative architecture of a live audio playing system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device 500 provided in an embodiment of the present application;
fig. 4 is an alternative flowchart of a live audio playing method according to an embodiment of the present application;
FIG. 5 is a schematic view of a live interface provided in an embodiment of the present application;
FIG. 6 is a schematic view of a live interface provided in an embodiment of the present application;
FIG. 7 is a schematic view of a setup interface provided by an embodiment of the present application;
FIG. 8 is a schematic view of a live interface provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of an audio data playing system in live broadcast according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail below with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" are used only to distinguish similar objects and do not denote a particular order. It should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Live broadcast: producing and releasing information on site, synchronously with the occurrence and development of an event; a network information release mode with a bidirectional circulation process.
2) In response to: used to indicate the condition or state on which a performed operation depends; when the condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are executed.
3) Quantity magnitude: used to grade different quantities into levels, such as the hundred-thousand magnitude, the million magnitude, and the like.
4) Sound effect: sound-effect processing applied to audio data, similar in concept to a general filter.
In the related art, in order to create a sense of atmosphere in the live broadcast room and let a user perceive other users in the live broadcast room, bullet screen information is usually displayed in the live broadcast interface of the live broadcast room, and the number of users is reflected by the amount of displayed bullet screen information: the more bullet screens there are, the more people are participating in interaction in the live broadcast room. For example, fig. 1 is a schematic diagram of a live broadcast interface provided in the related art. Referring to fig. 1, in order to create a warm, lively atmosphere, one third of the area of the live broadcast interface is used to display bullet screen information and commodity information, where the bullet screen information and the commodity information can appear simultaneously and the commodity information is displayed preferentially.
The applicant has found that, in the related art, a large number of users can only be conveyed by reducing the font size of the bullet screen information so that more bullet screen messages fit in a unit area, which lowers the efficiency with which users read the bullet screen information and wastes image display resources.
Based on this, embodiments of the present application provide an audio playing method, apparatus, device and computer-readable storage medium in live broadcasting, which can save image display resources and enhance perception of the quantity information of the objects performing the target interactive operation.
Referring to fig. 2, fig. 2 is an alternative architecture diagram of an audio playing system in live broadcasting provided in the embodiment of the present application. To support an exemplary application, terminals (terminal 400-1 and terminal 400-2 are shown as examples) are connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two. In actual implementation, a client, such as a live client, is arranged on each terminal; an anchor can stream live through the client, and a viewer can watch the live broadcast through the client. Here, the terminal may be located at the anchor side or at the viewer side.
And the terminal is used for playing the live broadcast content of the target live broadcast room through the live broadcast interface.
In actual implementation, the anchor-side terminal collects the live broadcast content (including image data and audio data), plays the collected live content through a live broadcast interface, and sends the collected live content to the server; the server 200 distributes the received live content to the viewer-side terminals of the target live broadcast room, and the viewer-side terminals play the received live content of the target live broadcast room through the live broadcast interface.
The server 200 is used for counting the quantity information of objects for executing target interaction operation in the target live broadcast room;
the terminal is used for acquiring the quantity information of objects for executing target interactive operation in the target live broadcast room from the server in the process of playing the live broadcast content; and playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects.
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a computer device 500 provided in the embodiment of the present application, in an actual application, the computer device 500 may be the terminal or the server 200 in fig. 2, and a computer device implementing the audio playing method in live broadcast in the embodiment of the present application is described by taking the computer device as the terminal shown in fig. 2 as an example. The computer device 500 shown in fig. 3 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in computer device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 3.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating to other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the live audio playing apparatus provided in this embodiment of the present application may be implemented in software, and fig. 3 illustrates a live audio playing apparatus 555 stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: the first playing module 5551, the obtaining module 5552 and the second playing module 5553 are logical modules, and therefore, they may be arbitrarily combined or further separated according to the implemented functions.
The functions of the respective modules will be explained below.
In other embodiments, the audio playing Device in the live broadcast provided in this embodiment may be implemented in hardware, and as an example, the audio playing Device in the live broadcast provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the audio playing method in the live broadcast provided in this embodiment, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The audio playing method in live broadcasting provided by the embodiment of the present application will be described in conjunction with exemplary applications and implementations of the terminal provided by the embodiment of the present application.
Referring to fig. 4, fig. 4 is an alternative flowchart of a method for playing audio in a live broadcast provided by the embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
Step 401: and the terminal plays the live broadcast content of the target live broadcast room through the live broadcast interface.
In actual implementation, a client is arranged on the terminal, for example, a live client, and a user can live through the live client or watch live through the live client. The terminal here may be a terminal located on the live broadcast side or a terminal located on the viewer side.
In practical application, the anchor-side terminal collects the live broadcast content, which includes image data and audio data, displays a live broadcast interface through the live client, plays the collected live content in the live broadcast interface, and sends the collected live content to the server; the server distributes the live content to the viewer-side terminals of the target live broadcast room; after receiving the live content, a viewer-side terminal displays a live broadcast interface through its live client and plays the live content of the target live broadcast room through the live broadcast interface.
Step 402: and in the process of playing the live broadcast content, acquiring the quantity information of objects for executing target interactive operation in the target live broadcast room.
Here, the quantity information may be a specific quantity, such as 98 people, or an order of magnitude, that is, specific quantities graded into levels, such as the hundred-thousand order, the million order, and the like; the target interaction operation can be preset by the system or set manually by a user.
In actual implementation, for any terminal, a user object can perform the target interaction operation in the target live broadcast room through the live client on the terminal, and when the terminal receives the target interaction operation, it sends operation information corresponding to the target interaction operation to the server; when the server receives the operation information, it counts the specific number of objects performing the target interaction operation, and when sending the statistical result to the terminal, the server can send the specific number or convert the specific number into an order of magnitude and then send that order of magnitude to the terminal.
It should be noted that the statistical quantity information may be a number of person-times, that is, if a certain object performs the target interaction operation several times, the quantity information is updated according to the number of times: for example, if user A performs the operation 3 times and user B performs it 2 times, the statistical quantity information is 5. It may also be a number of people, that is, even if a certain object performs the target interaction operation several times, the quantity information is increased by only 1: for example, if user A performs the operation 3 times and user B performs it 2 times, the statistical quantity information is 2.
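The two counting rules above can be illustrated with a short Python sketch; the log format and function names below are illustrative assumptions, not part of the method itself.

```python
# Illustrative sketch of the two counting rules described above (assumed data layout).
operation_log = [
    ("A", "clap"), ("A", "clap"), ("A", "clap"),   # user A performs the operation 3 times
    ("B", "clap"), ("B", "clap"),                  # user B performs it 2 times
]

def count_person_times(log):
    """Every operation record adds 1 (the '5' case in the text)."""
    return len(log)

def count_unique_objects(log):
    """Each object contributes at most 1 (the '2' case in the text)."""
    return len({object_id for object_id, _ in log})

print(count_person_times(operation_log))    # 5
print(count_unique_objects(operation_log))  # 2
```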
In some embodiments, before obtaining information of the number of objects performing the target interaction operation in the target live broadcast room, a plurality of audio selection items may be presented in the live broadcast interface, where each audio selection item corresponds to at least one audio interaction operation; and responding to the selection operation of the target audio selection item in the plurality of audio selection items, and taking at least one audio interaction operation corresponding to the target audio selection item as the target interaction operation.
In actual implementation, target interaction operation may be set, that is, the terminal presents a plurality of audio selection items in a live interface, the user object may select one or more target audio selection items from the plurality of audio selection items, and then at least one audio interaction operation corresponding to the target audio selection item is used as the target interaction operation.
For example, fig. 5 is a schematic view of a live interface provided in the embodiment of the present application, and referring to fig. 5, a plurality of audio selection items 501, such as system sound effects, privilege of iron powder, and the like, are displayed in the live interface; and selecting a target audio selection item by clicking the audio selection item, and taking at least one audio interaction operation corresponding to the target audio selection item as a target interaction operation.
Here, for each target audio selection item, all audio interaction operations corresponding to the target audio selection item may be set as target interaction operations, or a part of audio interaction operations corresponding to the target audio selection item may be set as target interaction operations.
In practical implementation, for each target audio selection item, multiple audio interaction operations corresponding to the target audio selection item can be independently selected. For example, fig. 6 is a schematic view of a live broadcast interface provided in an embodiment of the present application, and referring to fig. 6, a plurality of audio selection items 601 are shown, when a click operation for a target audio selection item in the plurality of audio selection items is received, a plurality of audio interaction operations corresponding to the target audio selection item are shown, for example, when a system audio function item is clicked, a plurality of audio interaction operations 602 corresponding to the system audio function item are shown, so that selection of the audio interaction operation corresponding to the target audio selection item can be achieved.
In some embodiments, when the current terminal is a terminal on the anchor side, the set target interactive operation may be for all terminals that play live content of the target live broadcast room. For example, when the current terminal takes the clapping operation and the cheering operation as the target interactive operation, all terminals take the clapping operation and the cheering operation as the target interactive operation.
In some embodiments, the target interaction operation set by the terminal may be for itself, that is, each terminal may independently set the target interaction operation for itself, for example, if the current terminal uses applause operation and cheering operation as the target interaction operation, the terminals on other audience sides in the target live broadcast room may use cheering operation as the target interaction operation.
Step 403: and playing audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects.
In actual implementation, the corresponding relation between the quantity information and the sound effect is preset, and after the quantity information is obtained, the sound effect matched with the quantity information is obtained according to the corresponding relation between the quantity information and the sound effect. Here, a plurality of pieces of quantity information may correspond to one sound effect, each piece of quantity information may correspond to one sound effect, or one piece of quantity information may correspond to a plurality of sound effects.
As an example, when the quantity information is an order of magnitude, the quantity information may be of the order of tens (i.e., a quantity of 0-99), hundreds (i.e., 100-999), thousands (i.e., 1000-9999), and so on. For example, when the quantity information is of the order of tens, the audio data in the live content is played with a bathroom scene sound effect; when the quantity information is of the order of thousands, the audio data in the live content is played with a studio scene sound effect.
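A minimal Python sketch of this count-to-magnitude-to-sound-effect lookup follows; the thresholds mirror the example above, while the effect name for the hundreds level is an assumption.

```python
# Assumed mapping from order of magnitude to a named scene sound effect.
SOUND_EFFECT_BY_MAGNITUDE = {
    "tens": "bathroom_scene",      # 0-99 viewers (small, echoing space)
    "hundreds": "hall_scene",      # 100-999 (name assumed for illustration)
    "thousands": "studio_scene",   # 1000-9999
}

def magnitude_of(count: int) -> str:
    if count < 100:
        return "tens"
    if count < 1000:
        return "hundreds"
    return "thousands"

def effect_for(count: int) -> str:
    return SOUND_EFFECT_BY_MAGNITUDE[magnitude_of(count)]

print(effect_for(98))    # bathroom_scene
print(effect_for(2500))  # studio_scene
```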
In some embodiments, the audio data in the live content can be played by adopting sound effects adapted to the quantity information of the objects in the following ways: acquiring a plurality of sound effects corresponding to different scenes; determining a target scene matched with the quantity information according to the quantity information of the objects; processing audio data in the live broadcast content by adopting a sound effect corresponding to the target scene to obtain audio data carrying scene information of the target scene; and playing the audio data carrying the scene information of the target scene.
Here, the correspondence between the scene and the sound effect may be one-to-one or one-to-many. That is, one scene may correspond to only one sound effect, or one scene may correspond to a plurality of sound effects.
In actual implementation, different scenes can correspond to different areas, the number of people that can be accommodated in the scenes with different areas is different, on the basis, the corresponding relation between the scenes and the quantity information can be constructed according to the number of people that can be accommodated in the area corresponding to each scene, and when the target scene matched with the quantity information is determined, the target scene can be determined according to the corresponding relation between the constructed scenes and the quantity information. Here, sound effects corresponding to scenes of different areas can be generated by sound space size simulation.
In practical applications, when audio data in the live content is processed through sound effects, all audio data in the live content may be processed, or only a part of audio data in the live content, such as only background music, may be processed.
In some embodiments, the audio data in the live content can be played by adopting sound effects adapted to the quantity information of the objects in the following ways: acquiring an audio material corresponding to target interactive operation; processing the audio material corresponding to the target interactive operation by adopting the sound effect matched with the quantity information of the objects; and superposing the processed audio material with the audio data in the live broadcast content, and playing the audio data superposed with the audio material.
Here, only the audio material corresponding to the target interaction operation is processed through the sound effect, and the processed audio material is then superposed onto the audio data; the number of target interaction operations may be one or more.
In practical implementation, after the audio material corresponding to the target interaction operation is processed through the sound effect matched with the quantity information of the objects, the resulting target audio material carries the quantity information, so that when a user hears the played target audio material, the user can perceive the quantity information of the objects.
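As a rough illustration of "process the material with the sound effect, then superpose it onto the live audio", the sketch below mixes sample arrays with NumPy; the gain-only apply_effect is merely a stand-in for the actual count-matched sound effect.

```python
import numpy as np

def apply_effect(samples: np.ndarray, gain: float) -> np.ndarray:
    # Stand-in for the count-matched sound effect; here only a gain change.
    return samples * gain

def superpose(live_audio: np.ndarray, material: np.ndarray) -> np.ndarray:
    mixed = live_audio.copy()
    n = min(len(live_audio), len(material))
    mixed[:n] += material[:n]              # overlay the material onto the live audio
    return np.clip(mixed, -1.0, 1.0)       # keep samples in range after mixing

live = np.random.uniform(-0.3, 0.3, 48000)       # 1 s of live audio at 48 kHz (toy data)
applause = np.random.uniform(-0.2, 0.2, 24000)   # stand-in applause material
output = superpose(live, apply_effect(applause, gain=0.8))
```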
In some embodiments, when the number of the target interactive operations is multiple, the target interactive operations correspond to the audio materials one to one; the audio materials corresponding to the target interaction operations can be respectively obtained, and the corresponding audio materials are processed by adopting the sound effects matched with the corresponding quantity information for executing the target interaction operations respectively to obtain a plurality of target audio materials; and superposing the target audio materials and the audio data in the live broadcast content, and playing the audio data superposed with the audio materials.
For example, when the target interaction operation includes a cheering operation, a clapping operation and an acclaim operation, the numbers of objects performing the cheering operation, the clapping operation and the acclaim operation are respectively obtained; for example, 200,000 objects perform the cheering operation, 500,000 objects perform the clapping operation and 300,000 objects perform the acclaim operation. Sound effect A corresponding to 200,000, sound effect B corresponding to 500,000 and sound effect C corresponding to 300,000 are acquired; the cheering sound is then processed through sound effect A, the applause sound through sound effect B and the acclaim sound through sound effect C; and the processed cheering sound, applause sound and acclaim sound are superposed onto the audio data for playing.
In this way, the corresponding audio materials are processed with sound effects matched to the quantity information of the objects performing each target interaction operation, so that a user can learn, from the played audio data, the quantity information of the objects performing each target interaction operation.
In some embodiments, when the number of target interaction operations is multiple, information of the total number of objects performing all the target interaction operations and information of the corresponding number of objects performing each target interaction operation may be obtained; then, according to the corresponding quantity information of each target interaction operation, determining the playing volume of the audio material corresponding to each target interaction operation, and overlapping the audio material corresponding to each target interaction operation based on the corresponding playing volume to obtain an overlapped audio material; and then, acquiring sound effects matched with the total quantity information, processing the superposed audio materials, superposing the processed audio materials and audio data in the live content, and playing the audio data superposed with the audio materials.
For example, when the target interaction operation includes a cheering operation, a clapping operation and an acclaim operation, the numbers of objects performing the cheering operation, the clapping operation and the acclaim operation are respectively obtained; for example, 200,000 objects perform the cheering operation, 500,000 objects perform the clapping operation and 300,000 objects perform the acclaim operation, so the total number is 1,000,000. The cheering sound at 20% volume, the applause sound at 50% volume and the acclaim sound at 30% volume are superposed to obtain the superposed audio material; the superposed audio material is processed through the sound effect corresponding to 1,000,000, the processed audio material is superposed with the audio data in the live content, and the audio data superposed with the audio material is played.
Therefore, the information of the number of the objects for executing each target interactive operation can be embodied, and the information of the total number of the objects for executing the target interactive operation can also be embodied.
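The volume-weighting rule in the example above amounts to scaling each material by its operation's share of the total count before superposition, as in this sketch (the sample lists are toy placeholders):

```python
counts = {"cheer": 200_000, "clap": 500_000, "acclaim": 300_000}
total = sum(counts.values())                       # 1,000,000

volumes = {op: n / total for op, n in counts.items()}
# -> {'cheer': 0.2, 'clap': 0.5, 'acclaim': 0.3}, i.e. 20% / 50% / 30% volume

def mix(materials, volumes):
    """Superpose equal-length sample lists, each scaled by its play volume."""
    length = len(next(iter(materials.values())))
    return [sum(materials[op][i] * volumes[op] for op in materials) for i in range(length)]

materials = {op: [0.1, 0.2, 0.3] for op in counts}   # toy 3-sample materials
combined = mix(materials, volumes)   # would then be processed with the effect matched to `total`
print(combined)
```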
In some embodiments, the target interaction operation includes a viewing operation and a plurality of audio interaction operations, each audio interaction operation corresponding to an audio material; the audio data in the live content can then be played with a sound effect adapted to the quantity information of the objects in the following way: acquiring the audio materials corresponding to the audio interaction operations; processing the corresponding audio materials with sound effects respectively matched to the quantity information of the objects performing each audio interaction operation, to obtain a plurality of target audio materials; and superposing the target audio materials with the audio data in the live content, and playing the audio data superposed with the target audio materials with a sound effect matched to the quantity information of the objects performing the viewing operation.
Here, the viewing operation refers to an operation of viewing live content of a target live broadcast room; the number information of the objects for performing the viewing operation refers to the number information of the objects for online viewing of the live content of the target live broadcast room.
In actual implementation, the target interactive operation includes two types of interactive operations, namely a watching operation and a plurality of audio interactive operations, wherein the watching operation is triggered as long as the terminal plays the live content of the target live broadcast room through the playing interface; and the audio interaction operation is triggered by a playing interface when the user object watches the live content of the target live broadcast room.
As an example, when the audio interaction operation includes a cheering operation and a clapping operation, the numbers of objects performing the cheering operation and the clapping operation are respectively obtained, for example, 200,000 objects perform the cheering operation and 500,000 objects perform the clapping operation; the number of objects performing the viewing operation, for example 1,000,000, is also obtained. Sound effect A corresponding to 200,000, sound effect B corresponding to 500,000 and sound effect C corresponding to 1,000,000 are acquired; the cheering sound is processed through sound effect A and the applause sound through sound effect B; the processed cheering sound, the applause sound and the audio data in the live content are then superposed, the superposed audio data is processed through sound effect C, and after processing, the resulting audio data is played.
In some embodiments, the terminal may further present, when the target interaction operation is an audio interaction operation for playing a target audio material, an interaction function item for triggering the audio interaction operation in the live broadcast interface; and responding to the triggering operation aiming at the interactive function item, and playing the target audio material.
Here, the plurality of interactive function items of the audio interactive operation may be respectively corresponding to different audio materials, and when a trigger operation for a certain interactive function item is received, a target audio material corresponding to the interactive function item is obtained and played.
In some embodiments, the target audio material is only played at the local end, that is, when the user object triggers the interactive function item, the terminal plays the target audio material and sends the interactive information to the server to inform the server that the user object executes the target interactive operation, so that the server can count the quantity information of the objects executing the target interactive operation.
In some embodiments, the target audio material is played at all terminals, that is, when the user object triggers the interactive function item, the terminal sends a playing instruction of the target audio material to the server, and the server adds the target audio material to the audio data of the live content and distributes the audio data to each terminal, so that each terminal plays the audio data added with the target audio material.
In some embodiments, in response to a trigger operation for an interactive function item, playing the target audio material includes: acquiring an operation type corresponding to the trigger operation, and determining a playing time length matched with the operation type; and playing the target audio material according to the playing time length.
The operation types can be divided according to the number of consecutive click operations; for example, the operation types can be a single-click operation, a double-click operation and the like, and different operation types correspond to different playing durations. For example, when the duration of the audio material is 2 seconds, the playing duration corresponding to a single-click operation is 2 seconds, the playing duration corresponding to a double-click operation is 4 seconds, and so on.
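A small sketch of this "operation type decides playing duration" rule; the 2 s / 4 s values mirror the example above, and the fallback behaviour is an assumption.

```python
MATERIAL_DURATION_S = 2.0   # assumed duration of the target audio material

PLAY_DURATION_BY_TYPE = {
    "single_click": 1 * MATERIAL_DURATION_S,   # play once: 2 s
    "double_click": 2 * MATERIAL_DURATION_S,   # play twice: 4 s
}

def play_duration(operation_type: str) -> float:
    # Unknown operation types fall back to playing the material once.
    return PLAY_DURATION_BY_TYPE.get(operation_type, MATERIAL_DURATION_S)

print(play_duration("double_click"))  # 4.0
```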
In some embodiments, the terminal may obtain information on the number of objects performing the target interactive operation in the target live broadcast room by: acquiring an object information list with social relation with a current user object; determining the quantity information of objects having social relations with the current object in the objects for executing the target interaction operation in the target live broadcast room based on the object information list; the terminal can play audio data in the live content by adopting the sound effect matched with the quantity information of the objects in the following mode: and playing audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects having social relations with the current object.
In actual implementation, the terminal acquires an object information list, such as a friend list and an attention list, having a social relationship with a current user object from the server, then acquires an object for executing target interaction operation in the target live broadcast room from the server, determines an object having a social relationship with the current object from the object for executing target interaction operation according to the object information list, and performs quantity information statistics to obtain quantity information of the objects having a social relationship with the current object from the objects for executing target interaction operation in the target live broadcast room. And after the quantity information is obtained, obtaining the sound effect matched with the quantity information, processing the audio data in the live broadcast content by adopting the obtained sound effect, and playing the audio data obtained by processing.
Since the user object may only be interested in objects with which there is a social relationship, it is possible to let the user object feel the feeling of seeing live together with friends.
In some embodiments, the terminal may only send the object information of the current user object to the server, the server searches for an object information list of the user object, then, according to the object information list, statistics is performed on quantity information of objects which execute target interaction operations in the target live broadcast room and have social relationships with the current user object, and the quantity information obtained through the statistics is sent to the terminal, and the terminal directly plays audio data in live broadcast content based on the quantity information by using a sound effect adapted to the quantity information.
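Whichever side does the counting, the filtering step reduces to intersecting the set of performing objects with the current user's relation list, as in this minimal sketch (the identifiers are made up):

```python
def count_related_objects(performing_objects, relation_list):
    """Count only objects that both perform the target operation and appear
    in the current user's relation list (e.g. friend list or follow list)."""
    return len(set(relation_list) & set(performing_objects))

performing = {"u1", "u2", "u3", "u9"}   # objects performing the target operation
friends = ["u2", "u9", "u42"]           # current user's relation list
print(count_related_objects(performing, friends))  # 2
```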
In some embodiments, the terminal may further receive, when the target interaction operation is an audio interaction operation for playing a target audio material, play indication information for indicating that the target audio material is played with a target volume; the target volume corresponds to an object distance, wherein the object distance is a distance between a geographical position of an object for executing the audio interaction operation and a geographical position of a current user object; and playing the target audio material by adopting the target volume.
In practical implementation, when a certain terminal receives a target interaction operation, a playing instruction of a target audio material is sent to a server, the server respectively calculates the distance between the position of each terminal and the position of the terminal sending the playing instruction, and calculates the volume of the target audio material played by each terminal according to the distance, wherein the distance is inversely proportional to the playing volume, namely the farther the distance is, the smaller the playing volume is; and after the volume of the target audio material played by each terminal is obtained, generating playing indication information corresponding to each terminal, and sending the playing indication information to the corresponding terminal, so that each terminal plays the target audio material by adopting the volume corresponding to the playing indication information according to the playing indication information.
For the terminal, after receiving the playing indication information, the terminal analyzes the playing indication information to determine the target volume and judge whether the terminal caches the target audio material, if so, the target audio material is played by adopting the target volume, otherwise, an acquisition request of the target audio material needs to be sent to the server to acquire the target audio material from the server, and after the target audio material is acquired, the target audio material is played by adopting the target volume.
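The text only states that the object distance and the playing volume are inversely related; the concrete formula below is therefore just one possible choice, shown for illustration.

```python
def target_volume(distance_km: float, max_volume: float = 1.0) -> float:
    """Assumed inverse relation: the farther the object, the lower the volume."""
    return max_volume / (1.0 + distance_km)

for d in (0, 10, 100, 1000):
    print(d, round(target_volume(d), 3))   # 1.0, 0.091, 0.01, 0.001
```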
In some embodiments, when an object newly added to the target live broadcast room exists, identity information of the object can be acquired; and playing the incoming audio data by adopting a sound effect matched with the identity information.
In actual implementation, when a new object enters the target live broadcast room, the user object can be reminded, through the incoming audio data, that a new object has entered the target live broadcast room. Here, for objects with different identity information, different sound effects are used to play the incoming audio data, so that different objects can be distinguished through the incoming audio data.
Here, the identity information may be the intimacy between the object entering the live broadcast room and the object of the current user, for example, when a friend enters a target live broadcast room, the sound effect a is used to play the incoming audio data; when a stranger enters a target live broadcast room, sound effect B is adopted to play the incoming audio data; the identity information may also be a user level of the object, such as a membership level, for which different sound effects are used.
In some embodiments, the terminal may obtain information on the number of objects performing the target interactive operation in the target live broadcast room by: when the quantity information is the quantity magnitude of the object for executing the target interactive operation, acquiring a first quantity magnitude of the object for executing the target interactive operation in the target live broadcast room; correspondingly, when the number magnitude is switched from the first number magnitude to the second number magnitude, the terminal can adjust the sound effect of the audio data from the sound effect matched with the first number magnitude to the sound effect matched with the second number magnitude.
In actual implementation, when the quantity information is in a quantity order, a first quantity order of an object for executing target interaction operation in a target live broadcast room is obtained, and audio data in live broadcast content is played by adopting a sound effect corresponding to the first quantity order; and in the playing process, the quantity magnitude of the object executing the target interaction operation is continuously monitored, and when the quantity magnitude changes, namely the quantity magnitude is switched from the first quantity magnitude to the second quantity magnitude, the sound effect of the audio data is switched. Here, the magnitude order may be acquired in real time or periodically.
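A sketch of the monitoring loop described above, polling the count and switching the sound effect only when the order of magnitude changes; the thresholds and names are illustrative assumptions.

```python
import time

def magnitude_of(count: int) -> str:
    return "tens" if count < 100 else "hundreds" if count < 1000 else "thousands"

def monitor_and_switch(get_count, switch_effect, poll_s=1.0, rounds=3):
    """Poll the viewer count; call switch_effect only on a magnitude change."""
    current = None
    for _ in range(rounds):
        magnitude = magnitude_of(get_count())
        if magnitude != current:           # e.g. "hundreds" -> "thousands"
            current = magnitude
            switch_effect(magnitude)
        time.sleep(poll_s)

# Toy run: the count crosses from hundreds into thousands, so the effect switches.
samples = iter([950, 1200, 1800])
monitor_and_switch(lambda: next(samples), lambda m: print("switch to", m), poll_s=0.0)
```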
In some embodiments, the terminal may further obtain a target scene corresponding to the sound effect; and in the process of playing the audio data, playing the animation special effect matched with the target scene.
In practical implementation, when the sound effect is the sound effect of the corresponding scene, the target scene corresponding to the sound effect can be obtained, and the animation special effect corresponding to the target scene is searched from the animation special effects. For example, when the sound effect is a studio scene sound effect, the image special effect of the studio scene is displayed in the live interface.
Therefore, the user can perceive the corresponding quantity information of the executed target interaction operation from the aspects of hearing sense and vision sense, and the perception of the corresponding quantity information of the executed target interaction operation is further enhanced.
The animation special effect can be displayed on the image data of the live content in a floating mode, and can also be fused with the image data of the live content.
In some embodiments, the terminal may determine the sound effect adapted to the number information of the objects by: determining a plurality of candidate sound effects associated with the live content according to the live content; and determining the sound effect matched with the quantity information of the object based on the corresponding relation between the quantity information and each candidate sound effect.
Here, for different live broadcast contents, different candidate sound effects can be set, for example, when the live broadcast contents are singing, the candidate sound effects can be KTV scene sound effects, concert scene sound effects and the like; when the live content is shopping, the candidate sound effect may be a street scene sound effect, a mall scene sound effect, or the like.
In actual implementation, the terminal can identify the live content according to image data and/or audio data in the live content to determine the category to which the live content belongs, and then screen out candidate sound effects associated with the live content according to the determined category, wherein one or more category labels can be set for each candidate sound effect, and then the determined category is matched with the labels of the candidate sound effects to obtain a plurality of candidate sound effects associated with the live content; and then determining the sound effect matched with the quantity information of the object based on the corresponding relation between the quantity information and each candidate sound effect.
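The label-matching step can be pictured as follows; the category labels, effect names and the count threshold are assumptions used only to illustrate the correspondence.

```python
# Assumed candidate effects, each tagged with content-category labels.
CANDIDATE_EFFECTS = {
    "ktv_scene": {"singing"},
    "concert_scene": {"singing"},
    "street_scene": {"shopping"},
    "mall_scene": {"shopping"},
}

def candidates_for(category: str):
    return [name for name, labels in CANDIDATE_EFFECTS.items() if category in labels]

def effect_for(category: str, count: int) -> str:
    candidates = candidates_for(category)    # e.g. ['ktv_scene', 'concert_scene']
    # Illustrative correspondence: smaller audiences map to the smaller venue.
    return candidates[0] if count < 10_000 else candidates[-1]

print(effect_for("singing", 500))       # ktv_scene
print(effect_for("singing", 200_000))   # concert_scene
```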
In some embodiments, playing the audio data in the live content with a sound effect adapted to the quantity information of the objects may include: when a target live event is received, acquiring the audio material associated with the target live event; and playing the audio material associated with the target live event with the sound effect adapted to the quantity information of the objects.
In actual implementation, the target live event and its associated audio material can be preset. While the live content of the target live broadcast room is playing, the live content is monitored to determine whether a target live event has occurred; when a target live event is detected, the audio material associated with it is played with a sound effect adapted to the quantity information of the objects.
Here, different target live events may be set for different types of live broadcast. For example, for a singing live broadcast, reaching the high-pitched part of a song may be set as the target live event, so that when the anchor is determined to have reached that part, applause is played with a sound effect adapted to the quantity information of the objects; for a game live broadcast, killing five enemies in a row may be set as the target live event, so that when the anchor is determined to have done so, cheering is played with a sound effect adapted to the quantity information of the objects.
By applying this embodiment, the quantity information of the objects performing the target interactive operation in the target live broadcast room is acquired while the live content is playing, and the audio data in the live content is played with a sound effect adapted to that quantity information. Through the different sound effects, the user can perceive how many objects are performing the target interactive operation in the target live broadcast room; compared with conveying this quantity information through image display, this saves image display resources and strengthens the perception of the quantity information of the objects performing the target interactive operation.
Next, an exemplary application of the embodiment of the present application in a practical scenario is described. In actual implementation, a plurality of sound effects can be preset at the background management end, each corresponding to a different range of viewer counts, that is, a different magnitude level. For example, a live broadcast room with about 10 viewers uses the echo-free sound effect of a 100-square-meter room, and so on by analogy for audiences of hundreds of people up to hundreds of millions.
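One plausible way to realize such a preset "sound effect per audience tier" is a room impulse response applied with the browser's Web Audio API. The tier thresholds and impulse-response file names below are assumptions; the disclosure does not specify how the sound effects are implemented internally.

```typescript
// Minimal sketch: pick a room impulse response by audience tier and apply it
// to the live audio with a ConvolverNode.

const tierImpulse: Array<{ maxViewers: number; impulseUrl: string }> = [
  { maxViewers: 10, impulseUrl: "ir/dry-100sqm-room.wav" },      // no audible echo
  { maxViewers: 1_000, impulseUrl: "ir/small-hall.wav" },
  { maxViewers: 1_000_000, impulseUrl: "ir/stadium.wav" },
  { maxViewers: Infinity, impulseUrl: "ir/open-air-festival.wav" },
];

async function buildReverbChain(
  ctx: AudioContext,
  source: AudioNode,        // live-room audio already decoded into the graph
  viewerCount: number,
): Promise<AudioNode> {
  const { impulseUrl } = tierImpulse.find((t) => viewerCount <= t.maxViewers)!;
  const response = await fetch(impulseUrl);
  const convolver = ctx.createConvolver();
  convolver.buffer = await ctx.decodeAudioData(await response.arrayBuffer());
  source.connect(convolver);
  convolver.connect(ctx.destination);
  return convolver;
}
```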
The client obtains the total viewing headcount of the live broadcast room, the correspondence between sound effects and quantity magnitudes, the types and counts of the received audio interaction operations, the user's location information, and friend interaction information. It then combines this information, processes the audio data in the live broadcast room with the corresponding sound effect, and plays the processed audio data.
For example, suppose 1 million people are watching the target live broadcast room (total viewing headcount), of whom 500,000 trigger the applause interaction, 300,000 trigger the cheer interaction, and 200,000 do not interact. The sound effect corresponding to 1 million viewers, say sound effect B, is determined from the correspondence between sound effects and quantity magnitudes. The proportion of each type of audio interaction operation is then obtained, here 50% applause and 30% cheering; the audio materials corresponding to these interactions are superimposed with those weights, the superimposed material is processed with sound effect B, and the processed audio data is played alongside the background music of the target live broadcast room. So that the anchor and the users can perceive this audio data without it interfering with the anchor's speech or performance, the volume of the audio data may be turned down, or the volume of the anchor's voice turned up, when the anchor is detected to be speaking.
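The mixing and ducking step described above might look like the following sketch: each interaction material is weighted by the share of viewers who triggered it, the weighted materials are summed onto a crowd bus, and that bus is attenuated while the anchor speaks. The gain constants and the `isAnchorSpeaking` flag are assumptions for illustration.

```typescript
// Weight each interaction material by its viewer share, mix onto one bus,
// and duck the bus so it never masks the anchor's speech.

function mixInteractionAudio(
  ctx: AudioContext,
  backgroundMusic: AudioNode,
  materials: Array<{ source: AudioNode; share: number }>, // e.g. applause 0.5, cheers 0.3
  isAnchorSpeaking: boolean,
): void {
  const crowdBus = ctx.createGain();
  for (const { source, share } of materials) {
    const g = ctx.createGain();
    g.gain.value = share;   // 50% applause, 30% cheering, etc.
    source.connect(g);
    g.connect(crowdBus);
  }
  // Duck the crowd mix while the anchor is detected to be speaking.
  crowdBus.gain.value = isAnchorSpeaking ? 0.3 : 1.0;
  crowdBus.connect(ctx.destination);
  backgroundMusic.connect(ctx.destination); // background music keeps playing
}
```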
In some embodiments, before starting the broadcast, the anchor may configure in a settings interface which audio interaction operations can be received; the operations may be enabled as bundled groups of sound effects, or subdivided and set item by item.
For example, fig. 7 is a schematic view of a settings interface provided in an embodiment of the present application. Referring to fig. 7, the settings interface presents an audio interaction control 701, a plurality of audio interaction options 702, and a more-permissions control 703. The audio interaction control 701 turns the audio interaction function on or off; only when the function is on can viewers trigger audio interaction operations. When the audio interaction function is on, the types of audio interaction operations to receive can be set through the audio interaction options, where each audio interaction option corresponds to one or more audio interaction operations. When a trigger operation on the more-permissions control 703 is received, additional audio interaction options 704 are presented; here, the audio interaction operations corresponding to each option can be set in finer detail, that is, the anchor may choose to receive all of the operations corresponding to an option or only some of them. For example, when the system sound effect function item is clicked, the sound materials 705 corresponding to its audio interaction operations are displayed; a sound material may be a short onomatopoeic effect (a sound effect) or a longer word or sentence (a voice).
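A possible data shape for these anchor-side settings is sketched below: a master switch plus, per option, either "accept all" or an explicit subset of operations together with their sound materials. All field names are illustrative and are not taken from the disclosure.

```typescript
// Hypothetical shape of the settings captured by the interface in fig. 7.
interface AudioInteractionSettings {
  enabled: boolean;                     // master audio-interaction switch (control 701)
  options: Array<{
    id: string;                         // e.g. "system-sound-effects"
    acceptAll: boolean;                 // receive every operation under this option
    acceptedOperations: string[];       // or only this subset
    materials: Array<{ name: string; kind: "sound-effect" | "voice" }>;
  }>;
}
```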
On the viewer side, the viewer can control the sound effect duration: if a sound effect has an intrinsic duration of N seconds, one click plays it for N seconds, three consecutive clicks play it for 3N seconds, and so on. For example, fig. 8 is a schematic view of a live interface provided in an embodiment of the present application. Referring to fig. 8, an audio interaction function item is displayed in the live interface; when the user clicks the audio interaction function item 801, the corresponding audio material is played, the playback duration is determined by the number of clicks, and the remaining duration is displayed at the position of the audio interaction function item 801.
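The click-to-duration rule reduces to simple arithmetic, as in the sketch below; `renderRemaining` is an assumed UI hook for showing the countdown on the interaction button.

```typescript
// A material with intrinsic length N seconds plays for N * clicks seconds,
// with the remaining time rendered once per second.

function playForClicks(
  materialDurationSec: number,
  clickCount: number,
  renderRemaining: (seconds: number) => void,
): void {
  let remaining = materialDurationSec * clickCount; // 3 clicks -> 3N seconds
  const timer = setInterval(() => {
    remaining -= 1;
    renderRemaining(Math.max(remaining, 0));
    if (remaining <= 0) clearInterval(timer);
  }, 1_000);
}
```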
It should be noted that the interactive feedback includes, but is not limited to, interactive sound effects, interactive animations, phone vibration feedback, and the like.
In some embodiments, the viewer's own interactions form the dominant sound source; that is, the audio material corresponding to an audio interaction operation triggered by the viewer themselves is played at a higher volume. Alternatively, the audio material corresponding to an operation triggered by another user is played at a volume corresponding to friend closeness, for example, material triggered by a user with higher closeness to the current user is played louder. Alternatively, playback volume is controlled by geographic position, for example, material triggered by a user geographically closer to the current user is played louder.
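These volume rules can be folded into one selection function, as sketched below; the specific scaling constants are assumptions, the disclosure only states the ordering (self loudest, then closer friends, then nearer strangers).

```typescript
// Hypothetical volume rule: self > friends by intimacy > strangers by distance.
function volumeFor(
  triggeredBySelf: boolean,
  intimacy: number | null,   // 0..1 if the trigger came from a friend, else null
  distanceKm: number,
): number {
  if (triggeredBySelf) return 1.0;                  // dominant sound source
  if (intimacy !== null) return 0.4 + 0.5 * intimacy;
  return Math.max(0.1, 0.6 - distanceKm / 10_000);  // nearer users play louder
}
```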
Fig. 9 is a schematic structural diagram of an audio data playing system in live broadcast provided in an embodiment of the present application, and referring to fig. 9, the audio data playing system includes a client, a CDN, and a service server.
In actual implementation, the client collects the user's interaction information, IP address, and friend list information in real time, and uploads updates through the CDN every 60 seconds, with each transmission triggered by an online clock. The business server performs the logical operations on this data: for the anchor terminal, it calculates the proportion of each interaction type among the total number of viewers and the interaction intensity; for a viewer terminal, it calculates the proportion of each interaction type among the total number of viewers, the number of interacting users at nearby IP addresses, and the number of interacting friends. The scene sound effect corresponding to each terminal is then determined from these results, so that each terminal outputs audio data with its corresponding scene sound effect.
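The per-terminal aggregation the business server performs on each 60-second batch could be sketched as follows; the record shape and helper names are illustrative assumptions, not structures named in the disclosure.

```typescript
// Aggregate one 60-second batch of interaction records.
interface InteractionRecord { userId: string; type: string; ipPrefix: string }

// Anchor side: share of each interaction type among all viewers.
function aggregateForAnchor(records: InteractionRecord[], totalViewers: number) {
  const byType = new Map<string, number>();
  for (const r of records) byType.set(r.type, (byType.get(r.type) ?? 0) + 1);
  return [...byType].map(([type, n]) => ({ type, share: n / totalViewers }));
}

// Viewer side: interacting users at nearby IP addresses and interacting friends.
function aggregateForViewer(
  records: InteractionRecord[],
  viewerIpPrefix: string,
  friendIds: Set<string>,
) {
  return {
    nearbyCount: records.filter((r) => r.ipPrefix === viewerIpPrefix).length,
    friendCount: records.filter((r) => friendIds.has(r.userId)).length,
  };
}
```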
Because the brain needs about 0.25 seconds to process visual information but only about 0.05 seconds to process auditory information, and because sound can create an emotional resonance that other forms cannot match, this embodiment can improve the perception of how many people are watching in the live broadcast room and increase the popularity of and participation in the live broadcast.
Continuing with the exemplary structure of the live audio playback device 555 provided by the embodiment of the present application implemented as software modules, in some embodiments, as shown in fig. 3, the software modules stored in the live audio playback device 555 in the memory 550 may include:
the first playing module 5551 is configured to play the live content of the target live broadcast room through the live broadcast interface;
an obtaining module 5552, configured to obtain information about the number of objects performing target interaction operations in the target live broadcast room during the process of playing the live broadcast content;
the second playing module 5553 is configured to play the audio data in the live content by using a sound effect adapted to the quantity information of the objects.
In some embodiments, the second playing module is further configured to obtain a plurality of sound effects corresponding to different scenes;
determining a target scene matched with the quantity information according to the quantity information of the objects;
processing the audio data in the live broadcast content by adopting the sound effect corresponding to the target scene to obtain the audio data carrying the scene information of the target scene;
and playing the audio data carrying the scene information of the target scene.
In some embodiments, the second playing module is further configured to obtain an audio material corresponding to the target interaction operation;
processing the audio material corresponding to the target interactive operation by adopting the sound effect matched with the quantity information of the objects;
and superposing the processed audio material with the audio data in the live broadcast content, and playing the audio data superposed with the audio material.
In some embodiments, the target interaction operation comprises a viewing operation and a plurality of audio interaction operations, each of the audio interaction operations corresponding to an audio material;
the second playing module is further configured to obtain audio materials corresponding to the audio interaction operations;
respectively adopting sound effects matched with the quantity information of the objects for executing the audio interaction operation to process corresponding audio materials to obtain a plurality of target audio materials;
superposing a plurality of target audio materials and audio data in the live broadcast content, and
and playing the audio data superposed with the target audio materials by adopting a sound effect matched with the quantity information of the objects for executing the watching operation.
In some embodiments, the obtaining module is further configured to present, in the live interface, a plurality of audio selection items, each audio selection item corresponding to at least one audio interaction operation;
and responding to the selection operation of a target audio selection item in the plurality of audio selection items, and taking at least one audio interaction operation corresponding to the target audio selection item as a target interaction operation.
In some embodiments, the second playing module is further configured to, when the target interaction operation is an audio interaction operation for playing a target audio material, present an interaction function item for triggering the audio interaction operation in a live interface;
and responding to the triggering operation aiming at the interactive function item, and playing the target audio material.
In some embodiments, the second playing module is further configured to obtain an operation type corresponding to the trigger operation, and determine a playing duration matched with the operation type;
and playing the target audio material according to the playing duration.
In some embodiments, the obtaining module is further configured to obtain an object information list having a social relationship with a current user object;
determining the quantity information of objects having social relations with the current object in the objects for executing the target interaction operation in the target live broadcast room based on the object information list;
and the second playing module is also used for playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects having social relations with the current object.
In some embodiments, the second playing module is further configured to receive, when the target interaction operation is an audio interaction operation for playing a target audio material, playing indication information for indicating that the target audio material is played at a target volume;
the target volume corresponds to an object distance, wherein the object distance is a distance between a geographic position of an object for executing the audio interaction operation and a geographic position of a current user object;
and playing the target audio material by adopting the target volume.
In some embodiments, the second playing module is further configured to, when an object newly added to the target live broadcast room exists, obtain identity information of the object;
and playing the entry audio data by adopting a sound effect matched with the identity information.
In some embodiments, the obtaining module is further configured to obtain a first quantity order of objects performing the target interaction operation in the target live broadcast room when the quantity information is the quantity order of the objects performing the target interaction operation;
and the second playing module is also used for adjusting the sound effect of the audio data from the sound effect matched with the first quantity magnitude to the sound effect matched with the second quantity magnitude when the quantity magnitude is switched from the first quantity magnitude to the second quantity magnitude.
In some embodiments, the second playing module is further configured to acquire a target scene corresponding to the sound effect;
and playing the animation special effect matched with the target scene in the process of playing the audio data.
In some embodiments, the second playing module is further configured to determine, based on the correspondence between the quantity information and each candidate sound effect, the sound effect adapted to the quantity information of the objects;
and matching the quantity information of the objects against the quantity information corresponding to each candidate sound effect to obtain the sound effect matched with the quantity information of the objects.
In some embodiments, the second playing module is further configured to, when a target live event is received, obtain audio material associated with the target live event;
and playing audio materials related to the target live event by adopting a sound effect matched with the quantity information of the objects.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the audio playing method in live broadcast described in the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform a method provided by embodiments of the present application, for example, the method as illustrated in fig. 4.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may, but need not, correspond to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (17)

1. An audio playing method in live broadcasting is characterized by comprising the following steps:
playing live broadcast content of a target live broadcast room through a live broadcast interface;
in the process of playing the live broadcast content, acquiring the quantity information of objects for executing target interaction operation in the target live broadcast room;
and playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects.
2. The method of claim 1, wherein the playing the audio data in the live content using a sound effect adapted to the information on the number of the objects comprises:
acquiring a plurality of sound effects corresponding to different scenes;
determining a target scene matched with the quantity information according to the quantity information of the objects;
processing the audio data in the live broadcast content by adopting the sound effect corresponding to the target scene to obtain the audio data carrying the scene information of the target scene;
and playing the audio data carrying the scene information of the target scene.
3. The method of claim 1, wherein the playing the audio data in the live content using a sound effect adapted to the information on the number of the objects comprises:
acquiring an audio material corresponding to the target interaction operation;
processing the audio material corresponding to the target interactive operation by adopting the sound effect matched with the quantity information of the objects;
and superposing the processed audio material with the audio data in the live broadcast content, and playing the audio data superposed with the audio material.
4. The method of claim 1, wherein the target interaction operation comprises a viewing operation and a plurality of audio interaction operations, each of the audio interaction operations corresponding to an audio material;
the audio data in the live broadcast content is played by adopting the sound effect matched with the quantity information of the objects, and the method comprises the following steps:
acquiring audio materials corresponding to the audio interaction operations;
respectively adopting sound effects matched with the quantity information of the objects for executing the audio interaction operation to process corresponding audio materials to obtain a plurality of target audio materials;
superposing a plurality of target audio materials and audio data in the live broadcast content, and
and playing the audio data superposed with the target audio materials by adopting a sound effect matched with the quantity information of the objects for executing the watching operation.
5. The method of claim 1, wherein before obtaining information of the number of objects performing the target interactive operation in the target live broadcast room, the method further comprises:
presenting a plurality of audio selection items in a live interface, wherein each audio selection item corresponds to at least one audio interaction operation;
and responding to the selection operation of a target audio selection item in the plurality of audio selection items, and taking at least one audio interaction operation corresponding to the target audio selection item as a target interaction operation.
6. The method of claim 1, further comprising:
when the target interaction operation is an audio interaction operation for playing a target audio material, presenting an interaction function item for triggering the audio interaction operation in a live broadcast interface;
and responding to the triggering operation aiming at the interactive function item, and playing the target audio material.
7. The method of claim 6, wherein the playing the target audio material in response to the triggering operation for the interactive function item comprises:
acquiring an operation type corresponding to the trigger operation, and determining a playing time length matched with the operation type;
and playing the target audio material according to the playing duration.
8. The method of claim 1, wherein the obtaining information of the number of objects performing the target interactive operation in the target live broadcast room comprises:
acquiring an object information list with social relation with a current user object;
determining the quantity information of objects having social relations with the current object in the objects for executing the target interaction operation in the target live broadcast room based on the object information list;
the audio data in the live broadcast content is played by adopting the sound effect matched with the quantity information of the objects, and the method comprises the following steps:
and playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects having social relations with the current object.
9. The method of claim 1, further comprising:
when the target interaction operation is an audio interaction operation for playing a target audio material, receiving playing indication information for indicating that the target audio material is played by adopting a target volume;
the target volume corresponds to an object distance, wherein the object distance is a distance between a geographic position of an object for executing the audio interaction operation and a geographic position of a current user object;
and playing the target audio material by adopting the target volume.
10. The method of claim 1, wherein the method further comprises:
when an object newly added into the target live broadcast room exists, acquiring identity information of the object;
and playing the entry audio data by adopting a sound effect matched with the identity information.
11. The method of claim 1, wherein the obtaining information of the number of objects performing the target interactive operation in the target live broadcast room comprises:
when the quantity information is the quantity magnitude of the object for executing the target interaction operation, acquiring a first quantity magnitude of the object for executing the target interaction operation in the target live broadcast room;
the method further comprises the following steps:
when the quantity magnitude is switched from the first quantity magnitude to the second quantity magnitude, the sound effect of the audio data is adjusted from the sound effect matched with the first quantity magnitude to the sound effect matched with the second quantity magnitude.
12. The method of claim 1, wherein the method further comprises:
acquiring a target scene corresponding to the sound effect;
and playing the animation special effect matched with the target scene in the process of playing the audio data.
13. The method of claim 1, wherein before playing the audio data in the live content using the sound effect adapted to the information on the number of the objects, the method further comprises:
determining a plurality of candidate sound effects associated with the live content according to the live content;
and determining the sound effect matched with the quantity information of the object based on the corresponding relation between the quantity information and the candidate sound effect.
14. The method of claim 1, wherein the playing the audio data in the live content using a sound effect adapted to the information on the number of the objects comprises:
when a target live event is received, acquiring audio materials associated with the target live event;
and playing audio materials related to the target live event by adopting a sound effect matched with the quantity information of the objects.
15. An audio playback apparatus in a live broadcast, comprising:
the first playing module is used for playing live broadcast content of a target live broadcast room through a live broadcast interface;
the acquisition module is used for acquiring the quantity information of objects for executing target interaction operation in the target live broadcast room in the process of playing the live broadcast content;
and the second playing module is used for playing the audio data in the live broadcast content by adopting a sound effect matched with the quantity information of the objects.
16. A computer device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of audio playback in a live broadcast of any of claims 1 to 14 when executing executable instructions stored in the memory.
17. A computer-readable storage medium storing executable instructions for implementing a method of audio playback in a live broadcast as claimed in any one of claims 1 to 14 when executed by a processor.
CN202110441630.2A 2021-04-23 2021-04-23 Audio playing method, device, equipment and storage medium in live broadcast Pending CN113031906A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110441630.2A CN113031906A (en) 2021-04-23 2021-04-23 Audio playing method, device, equipment and storage medium in live broadcast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110441630.2A CN113031906A (en) 2021-04-23 2021-04-23 Audio playing method, device, equipment and storage medium in live broadcast

Publications (1)

Publication Number Publication Date
CN113031906A true CN113031906A (en) 2021-06-25

Family

ID=76457478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110441630.2A Pending CN113031906A (en) 2021-04-23 2021-04-23 Audio playing method, device, equipment and storage medium in live broadcast

Country Status (1)

Country Link
CN (1) CN113031906A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106464939A (en) * 2016-07-28 2017-02-22 北京小米移动软件有限公司 Method and device for playing sound effect
CN106559694A (en) * 2016-09-29 2017-04-05 广州华多网络科技有限公司 A kind of method and device that user's admission scene is rendered for online direct broadcasting room
CN112165628A (en) * 2020-09-29 2021-01-01 广州繁星互娱信息科技有限公司 Live broadcast interaction method, device, equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596490A (en) * 2021-07-12 2021-11-02 腾讯科技(深圳)有限公司 Live broadcast interaction method and device, storage medium and electronic equipment
CN113596490B (en) * 2021-07-12 2022-11-08 腾讯科技(深圳)有限公司 Live broadcast interaction method and device, storage medium and electronic equipment
CN113573084A (en) * 2021-07-21 2021-10-29 广州方硅信息技术有限公司 Live broadcast interaction method, system, device, equipment and storage medium
CN114866791A (en) * 2022-03-31 2022-08-05 北京达佳互联信息技术有限公司 Sound effect switching method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108184144B (en) Live broadcast method and device, storage medium and electronic equipment
CN108401175B (en) Barrage message processing method and device, storage medium and electronic equipment
US10496358B1 (en) Directional audio for virtual environments
CN111711831B (en) Data processing method and device based on interactive behavior and storage medium
US20220360825A1 (en) Livestreaming processing method and apparatus, electronic device, and computer-readable storage medium
CN113031906A (en) Audio playing method, device, equipment and storage medium in live broadcast
CN105450642B (en) It is a kind of based on the data processing method being broadcast live online, relevant apparatus and system
CN112087655B (en) Method and device for presenting virtual gift and electronic equipment
CN113727130B (en) Message prompting method, system and device for live broadcasting room and computer equipment
JP2018523386A (en) Streaming media presentation system
Bracken et al. Sounding out small screens and telepresence
CA2634201A1 (en) Social network-enabled interactive media player
US10506268B2 (en) Identifying media content for simultaneous playback
WO2012135048A2 (en) Systems and methods for capturing event feedback
KR20080106401A (en) Streaming media casts, such as in a video game or mobile device environment
CN102170591A (en) Content playing device
WO2019047850A1 (en) Identifier displaying method and device, request responding method and device
CN113613027B (en) Live broadcast room recommendation method and device and computer equipment
CN111970521B (en) Live broadcast method and device of virtual anchor, computer equipment and storage medium
CN113438492B (en) Method, system, computer device and storage medium for generating title in live broadcast
CN114007095A (en) Voice microphone-connecting interaction method, system, medium and computer equipment for live broadcast room
CN114449301B (en) Item sending method, item sending device, electronic equipment and computer-readable storage medium
KR20200028830A (en) Real-time computer graphics video broadcasting service system
CN110166801B (en) Media file processing method and device and storage medium
CN114513679B (en) Live broadcast room recommendation method, system and computer equipment based on audio pre-playing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40046390
Country of ref document: HK