CN109529335B - Game role sound effect processing method and device, mobile terminal and storage medium - Google Patents

Game role sound effect processing method and device, mobile terminal and storage medium

Info

Publication number
CN109529335B
Authority
CN
China
Prior art keywords: game, sound effect, effect processing, role, processing algorithm
Prior art date
Legal status
Active
Application number
CN201811315169.0A
Other languages
Chinese (zh)
Other versions
CN109529335A (en)
Inventor
朱克智
严锋贵
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811315169.0A
Publication of CN109529335A
Application granted
Publication of CN109529335B

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6063 Methods for processing data by generating or executing the game program for sound processing
    • A63F2300/6081 Methods for processing data by generating or executing the game program for sound processing generating an output signal, e.g. under timing constraints, for spatialization

Abstract

The embodiment of the application discloses a sound effect processing method and device for game characters, a mobile terminal, and a storage medium. The method includes: acquiring a target position of a target game character in a game scene; determining N game characters other than the target game character within a preset distance range from the target position in the game scene; determining, based on the game state of a first game character, a first sound effect processing algorithm corresponding to the first game character, where the first game character is any one of the N game characters; and processing the audio of the first game character according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game character. The embodiment of the application can enrich the sound effects of game characters.

Description

Game role sound effect processing method and device, mobile terminal and storage medium
Technical Field
The application relates to the technical field of audio, and in particular to a sound effect processing method and device for a game character, a mobile terminal, and a storage medium.
Background
With the widespread use of mobile terminals (such as mobile phones and tablet computers), the applications that mobile terminals can support keep increasing, their functions are becoming more and more powerful, and they are developing towards diversification and personalization, becoming indispensable electronic products in users' daily lives. Currently, in a game scene, the sounds made by game characters can be presented in the form of three-dimensional (3D) sound effects. However, these sound effects are fixed, so the sound effects of a game character are relatively monotonous.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing sound effects of game roles, a mobile terminal and a storage medium, which can enrich the sound effects of the game roles.
In a first aspect, an embodiment of the present application provides a method for processing sound effects of a game character, including:
acquiring a target position of a target game role in a game scene;
determining N game roles except the target game role within a preset distance range from the target position in the game scene;
determining a first sound effect processing algorithm corresponding to a first game role based on the game state of the first game role, wherein the first game role is any one of the N game roles;
and processing the audio of the first game role according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game role.
In a second aspect, an embodiment of the present application provides a device for processing sound effects of a game character, including:
the game system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a target position of a target game role in a game scene;
the first determining unit is used for determining N game characters except the target game character within a preset distance range from the target position in the game scene;
a second determining unit, configured to determine, based on a game state of a first game character, a first sound effect processing algorithm corresponding to the first game character, where the first game character is any one of the N game characters;
and the sound effect processing unit is used for processing the audio of the first game role according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game role.
In a third aspect, an embodiment of the present application provides a mobile terminal, including a processor, and a memory, where the memory is configured to store one or more programs, where the one or more programs are configured to be executed by the processor, and where the program includes instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the sound effect processing method for a game character described in the embodiment of the application, the mobile terminal acquires the target position of the target game character in the game scene; determines N game characters other than the target game character within a preset distance range from the target position in the game scene; determines, based on the game state of a first game character, a first sound effect processing algorithm corresponding to the first game character, where the first game character is any one of the N game characters; and processes the audio of the first game character according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game character. In this way, the sound effect of the first game character relative to the target game character can be determined, the sound effects of the surrounding game characters can be presented from the perspective of the target game character as far as possible, and the sound effects of other game characters can be determined according to their game states, which enriches the sound effects of game characters.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flow chart illustrating a method for processing sound effects of a game character according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an analog transmission of an audio signal according to an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating another sound effect processing method for a game character according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a sound effect processing apparatus for a game character according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a mobile terminal disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The mobile terminal according to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For convenience of description, the above devices are collectively referred to as mobile terminals.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic flow chart of a sound effect processing method for a game character according to an embodiment of the present application, and as shown in fig. 1, the sound effect processing method for a game character includes the following steps.
101, the mobile terminal acquires the target position of the target game character in the game scene.
In the embodiment of the application, various types of multiplayer online tactical competitive games can run on the mobile terminal, for example, a multiplayer online sniper game, a multiplayer online role-playing competitive game, a multiplayer online spinning-vehicle competitive game, and the like. When a multiplayer online tactical competitive game runs, the game players play online together; each player can select a game scene in the game and a game character in that game scene. The game characters in a multiplayer online tactical competitive game can be divided into two teams (such as a red team and a blue team) that fight each other, and the team that wins the battle or achieves a certain objective (such as destroying the opposing team's base) wins the game. A game scene contains at least two game characters, and the number of game characters in different game scenes is not necessarily the same.
A game character can move in the game scene, and at different positions in the game scene the sound effects it can perceive are also different. The embodiment of the application obtains, from the perspective of the target game character controlled by the game user, the sound effects that the target game character can perceive. These sound effects may include the sound effects corresponding to the audio of other game characters around the target game character (game characters on the player's own side or on the opposing side), and may also include the sound effect corresponding to the background music of the game scene.
Since the game scene is preloaded, when the game runs, the mobile terminal can determine the position of the target game character in the game scene according to the coordinates of the target game character in the game scene, and record the position of the target game character in the game scene as the target position.
102, the mobile terminal determines N game characters except the target game character within a preset distance range from the target position in the game scene.
In the embodiment of the present application, the preset distance range may be preset and stored in a memory (e.g., a non-volatile memory) of the mobile terminal. For example, the preset distance range may be set to 5 meters. The preset distance range may be determined based on a game scene.
With the target position as the center, the mobile terminal determines whether other game characters exist within the preset distance range from the target position in the game scene. If other game characters exist, the mobile terminal determines the number N of game characters other than the target game character within the preset distance range from the target position, and then performs step 103. If no other game character exists, the mobile terminal continues to perform the step of determining whether other game characters exist within the preset distance range from the target position in the game scene.
N is a positive integer. The N game characters may be game characters on the player's own side, game characters on the opposing side, or a mix of both. Step 102 determines the N game characters other than the target game character within the preset distance range from the target position, so that the sound effects of these N game characters can be processed subsequently. The sound effects of game characters outside the preset distance range are not processed; only the sound effects of game characters close to the target game character are processed, which reduces the complexity of sound effect processing for game characters.
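To make the distance filter of step 102 concrete, the following Kotlin sketch shows one way it could be implemented. It is illustrative only: the Position and GameCharacter types, the function names, and the 5-meter default are assumptions (the 5-meter figure echoes the example above), not structures defined by the patent.

```kotlin
import kotlin.math.sqrt

// Hypothetical types for illustration; the patent does not define concrete data structures.
data class Position(val x: Float, val y: Float, val z: Float)
data class GameCharacter(val id: String, val position: Position)

// Euclidean distance between two positions in the game scene.
fun distance(a: Position, b: Position): Float {
    val dx = a.x - b.x; val dy = a.y - b.y; val dz = a.z - b.z
    return sqrt(dx * dx + dy * dy + dz * dz)
}

// Step 102: keep only the characters (other than the target) within the preset range.
fun charactersWithinRange(
    target: GameCharacter,
    all: List<GameCharacter>,
    presetRange: Float = 5.0f          // example value from the description
): List<GameCharacter> =
    all.filter { it.id != target.id && distance(it.position, target.position) <= presetRange }
```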
103, the mobile terminal determines a first sound effect processing algorithm corresponding to a first game role based on the game state of the first game role, wherein the first game role is any one of the N game roles.
In the embodiment of the present application, the game state of the first game character may include the blood volume (health) value, the mana value, the energy value, and the like of the first game character.
The game state can be determined according to the combat situation of the game character and the game progress. For example, when the first game character is fighting game characters on the opposing side, its game state, such as the blood volume value, mana value, and energy value, changes. As the game progresses, the game state of the first game character changes accordingly; for example, the level of the first game character rises, and its blood volume value, mana value, and energy value increase accordingly. The blood volume value of the first game character can also be understood as the life value of the first game character.
Different game states correspond to different sound effect processing algorithms. For example, different blood volume values, different mana values, and different energy values may correspond to different sound effect processing methods. In this way, the target game character can quickly infer the states of other game characters from their sound effects and decide whether to fight them.
Optionally, the mobile terminal may further determine a first sound effect processing algorithm corresponding to the first game character based on the game progress.
Optionally, the mobile terminal may further determine a first sound effect processing algorithm corresponding to the first game character based on the game state and the game progress of the first game character.
Optionally, in step 103, the mobile terminal determines, based on the game state of the first game character, a first sound effect processing algorithm corresponding to the first game character, including the following steps:
(11) the mobile terminal acquires a corresponding relation between a game state and a sound effect processing algorithm;
(12) and the mobile terminal determines a first sound effect processing algorithm corresponding to the game state of the first game role according to the corresponding relation between the game state and the sound effect processing algorithm.
In the embodiment of the application, the correspondence between game states and sound effect processing algorithms can be established in advance. When the sound effect processing algorithm corresponding to the first game character needs to be determined, the mobile terminal obtains this correspondence and, according to it, determines the first sound effect processing algorithm corresponding to the game state of the first game character. The correspondence between game states and sound effect processing algorithms can be stored in advance in a non-volatile memory of the mobile terminal.
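The correspondence described in steps (11) and (12) can be pictured as a simple lookup table. The Kotlin sketch below is a minimal illustration under assumed names: the GameState values, the attenuation lambdas, and the SoundEffectAlgorithm typealias are hypothetical, since the patent does not specify concrete algorithms.

```kotlin
// A sound effect processing algorithm is modelled here as a function from raw audio samples
// to processed samples; the patent leaves the concrete algorithms unspecified.
typealias SoundEffectAlgorithm = (FloatArray) -> FloatArray

// Hypothetical game states; the patent mentions blood volume, mana, and energy values.
enum class GameState { HEALTHY, WOUNDED, CRITICAL }

// Hypothetical correspondence table, pre-established and stored on the terminal.
val stateToAlgorithm: Map<GameState, SoundEffectAlgorithm> = mapOf(
    GameState.HEALTHY to { audio -> audio },                                        // e.g. leave audio unchanged
    GameState.WOUNDED to { audio -> FloatArray(audio.size) { i -> audio[i] * 0.7f } },  // e.g. attenuate
    GameState.CRITICAL to { audio -> FloatArray(audio.size) { i -> audio[i] * 0.4f } }  // e.g. attenuate more
)

// Steps (11)-(12): look up the first character's algorithm from its game state.
fun algorithmFor(state: GameState): SoundEffectAlgorithm = stateToAlgorithm.getValue(state)
```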
Optionally, the game state includes a game character blood volume value, and the step (11) of obtaining, by the mobile terminal, a correspondence between the game state and the sound effect processing algorithm may specifically include the following steps:
(111) the method comprises the steps that a mobile terminal obtains a corresponding relation between a blood volume interval of a game role and a sound effect processing algorithm;
(112) the mobile terminal obtains the blood volume value of the game role of the first game role, and determines the blood volume interval of the first game role in which the blood volume value of the game role of the first game role falls.
The step (12) of the mobile terminal determining the first sound effect processing algorithm corresponding to the game state of the first game character according to the corresponding relationship between the game state and the sound effect processing algorithm may specifically include the following steps:
(121) the mobile terminal determines a first sound effect processing algorithm corresponding to the first game character blood volume interval according to the corresponding relation between the game character blood volume interval and the sound effect processing algorithm.
In this embodiment, the game state is described by taking the blood volume value of a game character as an example. Different sound effect processing algorithms are distinguished by blood volume intervals rather than by individual blood volume values. After the blood volume value of the first game character is obtained, the first blood volume interval into which it falls can be determined, and the first sound effect processing algorithm corresponding to that interval is then determined according to the correspondence between blood volume intervals and sound effect processing algorithms. This is simple and effective, reduces the complexity of the sound effect processing algorithm, and improves the efficiency of sound effect processing for game characters.
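A minimal sketch of the blood-volume-interval lookup of steps (111) to (121) follows. The interval boundaries and algorithm identifiers are assumptions chosen for illustration; the patent only requires that each blood volume interval map to one sound effect processing algorithm.

```kotlin
// Hypothetical blood volume intervals and algorithm identifiers.
data class BloodInterval(val range: IntRange, val algorithmId: String)

val bloodIntervalTable = listOf(
    BloodInterval(0..30, "lowHealthEffect"),      // illustrative interval boundaries
    BloodInterval(31..70, "mediumHealthEffect"),
    BloodInterval(71..100, "fullHealthEffect")
)

// Steps (111)-(121): find the interval the character's blood volume value falls into
// and return the corresponding algorithm identifier (null if no interval matches).
fun algorithmForBloodValue(bloodValue: Int): String? =
    bloodIntervalTable.firstOrNull { bloodValue in it.range }?.algorithmId
```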
Optionally, the game state includes a blood volume change rate of the game character, and the step (11) of obtaining a corresponding relationship between the game state and the sound effect processing algorithm may specifically include the following steps:
(113) the method comprises the steps that a mobile terminal obtains a corresponding relation between a blood volume change rate interval of a game role and a sound effect processing algorithm;
(114) the mobile terminal acquires the blood volume change rate of the game role of the first game role, and determines a first game role blood volume change rate interval in which the blood volume change rate of the game role of the first game role falls.
The step (12) of the mobile terminal determining the first sound effect processing algorithm corresponding to the game state of the first game character according to the corresponding relationship between the game state and the sound effect processing algorithm may specifically include the following steps:
(122) and the mobile terminal determines a first sound effect processing algorithm corresponding to the first game character blood volume change rate interval according to the corresponding relation between the game character blood volume change rate interval and the sound effect processing algorithm.
In this embodiment, the game state is described by taking the blood volume change rate of a game character as an example. Different sound effect processing algorithms are distinguished by blood volume change rate intervals rather than by individual change rate values. After the blood volume change rate of the first game character is obtained, the first change rate interval into which it falls can be determined, and the first sound effect processing algorithm corresponding to that interval is then determined according to the correspondence between change rate intervals and sound effect processing algorithms. This is simple and effective, reduces the complexity of the sound effect processing algorithm, and improves the efficiency of sound effect processing for game characters.
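The change-rate variant can be sketched in the same way. In the Kotlin snippet below, the way the rate is estimated (two sampled blood volume values over a known time window) and the interval thresholds are assumptions; the patent only states that change rate intervals map to sound effect processing algorithms.

```kotlin
// Hypothetical estimate: blood volume change per second over a sampling window.
fun bloodChangeRate(previous: Int, current: Int, windowSeconds: Float): Float =
    (current - previous) / windowSeconds     // negative when the character is losing blood

// Hypothetical change-rate intervals and algorithm identifiers.
fun algorithmForChangeRate(rate: Float): String = when {
    rate <= -10f -> "heavyDamageEffect"      // losing blood quickly
    rate < 0f    -> "lightDamageEffect"      // losing blood slowly
    else         -> "recoveringEffect"       // stable or regenerating
}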
Optionally, the game state includes a game role blood volume value and a game role blood volume change rate, and the step (11) of obtaining a corresponding relationship between the game state and the sound effect processing algorithm may specifically include the following steps:
(115) the mobile terminal acquires the corresponding relation between the game role blood volume value interval, the game role blood volume change rate interval and the sound effect processing algorithm;
(116) the mobile terminal acquires the game role blood volume value of the first game role and the game role blood volume change rate of the first game role, and determines a first game state interval in which the game role blood volume value of the first game role and the game role blood volume change rate of the first game role fall.
The step (12) of the mobile terminal determining the first sound effect processing algorithm corresponding to the game state of the first game character according to the corresponding relationship between the game state and the sound effect processing algorithm may specifically include the following steps:
(123) and the mobile terminal determines a first sound effect processing algorithm corresponding to the first game state interval according to the corresponding relation between the game state interval and the sound effect processing algorithm.
In this embodiment, the blood volume value and the blood volume change rate of the game character are considered at the same time, so the sound effect processing algorithm corresponding to the first game character can be determined more accurately, which in turn improves the sound effect of the first game character.
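A combined lookup keyed on both quantities, as in steps (115) to (123), might look like the following sketch. The pairing of a blood volume interval with a change rate interval into one game state interval follows the description above; the concrete boundaries and algorithm identifiers are assumptions.

```kotlin
// Hypothetical game state interval: a blood volume interval paired with a change rate interval.
data class StateInterval(
    val bloodRange: IntRange,
    val rateRange: ClosedFloatingPointRange<Float>,
    val algorithmId: String
)

val stateIntervalTable = listOf(
    StateInterval(0..30, -100f..0f, "criticalAndFallingEffect"),
    StateInterval(0..30, 0f..100f, "criticalButRecoveringEffect"),
    StateInterval(31..100, -100f..0f, "stableButFallingEffect"),
    StateInterval(31..100, 0f..100f, "stableAndRecoveringEffect")
)

// Steps (115)-(123): the first interval containing both values wins.
fun algorithmFor(bloodValue: Int, changeRate: Float): String? =
    stateIntervalTable.firstOrNull {
        bloodValue in it.bloodRange && changeRate in it.rateRange
    }?.algorithmId
```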
And 104, the mobile terminal processes the audio of the first game role according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game role.
In the embodiment of the present application, the mobile terminal processes the audio of the first game role according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game role, including:
the mobile terminal processes the audio of the first game role according to the target position, the position of the first game role and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game role.
The first sound effect processing algorithm may be a reverberation sound effect algorithm, as shown in fig. 2. Fig. 2 is a schematic diagram of the simulated transmission of an audio signal according to an embodiment of the application. In fig. 2, the audio signal generated by the audio playing end (the first game character) can reach the audio receiving end (the target game character) both directly and by reflection, so that a reverberation effect is formed at the audio receiving end. Two reflection paths are illustrated in fig. 2: a first reflection path and a second reflection path, each reaching the audio receiving end after a single reflection. Fig. 2 is merely an example of audio signal transmission; the audio signal may reach the audio receiving end through one, two, or more than two reflection paths, and the number of reflections and the reflection paths differ from one game scene to another. Whether the audio signal travels directly or by reflection, it undergoes some attenuation, and the attenuation coefficient is determined by the path length, the number of reflections, the transmission medium, and the material at the reflection point. As shown in fig. 2, after the audio signal sent by the first game character reaches the target position of the target game character through the three paths, a reverberation effect is formed at the target position: P = S1 × R1 + S2 × R2 + S3 × R3, where S1 is the attenuation coefficient of the first reflection path, S2 is the attenuation coefficient of the second reflection path, S3 is the attenuation coefficient of the direct path, R1 is the first initial audio signal transmitted along the first reflection path, R2 is the second initial audio signal transmitted along the second reflection path, and R3 is the third initial audio signal transmitted along the direct path. The first reflection path passes through a first reflecting surface of the game scene, so S1 is related to the material of the first reflecting surface, the default propagation medium of the game scene, and the length of the first reflection path; the second reflection path passes through a second reflecting surface of the game scene, so S2 is related to the material of the second reflecting surface, the default propagation medium of the game scene, and the length of the second reflection path; S3 is related to the default propagation medium of the game scene and the length of the direct path. R1, R2, and R3 are related to the spatial distribution of the sound field of the audio signal emitted by the first game character in real three-dimensional space. With the material of the first reflecting surface and the default propagation medium fixed, the longer the first reflection path, the smaller S1; with the material of the second reflecting surface and the default propagation medium fixed, the longer the second reflection path, the smaller S2; with the default propagation medium fixed, the longer the direct path, the smaller S3.
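The attenuation-weighted sum P = S1 × R1 + S2 × R2 + S3 × R3 can be illustrated with a short sketch. The Kotlin snippet below simplifies the model by treating every path as a delayed, attenuated copy of one source buffer, whereas the patent allows each path to carry its own initial signal Ri depending on the spatial sound field; the per-path delays and the example coefficients are assumptions.

```kotlin
// A propagation path from the first game character to the target position.
data class PropagationPath(
    val attenuation: Float,     // S_i: depends on path length, medium, and reflection material
    val delaySamples: Int       // assumed per-path delay derived from path length and speed of sound
)

// Sum the direct and reflected contributions at the receiver, as in P = S1*R1 + S2*R2 + S3*R3.
fun mixPathsAtReceiver(source: FloatArray, paths: List<PropagationPath>): FloatArray {
    val maxDelay = paths.maxOf { it.delaySamples }
    val out = FloatArray(source.size + maxDelay)
    for (path in paths) {
        for (i in source.indices) {
            // Each path contributes a delayed, attenuated copy of the source signal.
            out[i + path.delaySamples] += path.attenuation * source[i]
        }
    }
    return out
}

// Example call: one direct path and two single-reflection paths, as illustrated in fig. 2.
// The source buffer is a one-second placeholder standing in for the first character's audio.
val reverberated = mixPathsAtReceiver(
    source = FloatArray(44100) { 0f },
    paths = listOf(
        PropagationPath(attenuation = 0.8f, delaySamples = 0),     // direct path (S3)
        PropagationPath(attenuation = 0.4f, delaySamples = 441),   // first reflection path (S1)
        PropagationPath(attenuation = 0.3f, delaySamples = 662)    // second reflection path (S2)
    )
)
```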
Optionally, step 104 may include the steps of:
the mobile terminal processes the audio of the first game role according to the target position, the scene parameters of the game scene and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game role.
In the embodiment of the application, the scene parameters of the game scene may include the area of the game scene, the materials of the scenery (sets) in the game scene, and the air medium parameters of the game scene. The scene parameters are used to determine the calculation parameters in the first sound effect processing algorithm.
Taking the scene parameters of the game scene into account further optimizes the sound effect processing algorithm: different game scenes can lead to different sound effect processing algorithms, which further improves the game sound effects.
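One plausible way the scene parameters could feed the calculation parameters of the algorithm is sketched below: the air medium parameter drives a distance-dependent loss and the set material drives a reflection loss. The exponential loss model and the absorption table are assumptions, not a formula given by the patent.

```kotlin
import kotlin.math.exp

// Hypothetical scene parameters, following the description above.
data class SceneParameters(
    val mediumAttenuationPerMeter: Float,          // air medium parameter of the scene
    val materialAbsorption: Map<String, Float>     // set (scenery) material -> absorption in [0, 1]
)

// Assumed model: exponential loss along the path, times the reflectivity of the surface hit.
fun pathAttenuation(
    scene: SceneParameters,
    pathLengthMeters: Float,
    reflectionMaterial: String? = null             // null for the direct path
): Float {
    val mediumLoss = exp(-scene.mediumAttenuationPerMeter * pathLengthMeters)
    val reflectionLoss = reflectionMaterial
        ?.let { 1f - (scene.materialAbsorption[it] ?: 0f) }
        ?: 1f
    return mediumLoss * reflectionLoss
}
```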
It should be noted that, in the above embodiments, the first game character is taken as an example to determine the first sound effect corresponding to the first game character. The sound effect processing method of other game characters in the N game characters may refer to the sound effect processing algorithm of the first game character, which is not described herein again.
In the embodiment of the application, the sound effect of the first game character relative to the target game character can be determined, the sound effects of the surrounding game characters can be presented from the perspective of the target game character as far as possible, and the sound effects of other game characters can be determined according to their game states, which enriches the sound effects of game characters.
Referring to fig. 3, fig. 3 is a schematic flow chart of another sound effect processing method for a game character according to an embodiment of the present application, and as shown in fig. 3, the sound effect processing method for a game character includes the following steps.
301, the mobile terminal obtains a target position of a target game character in a game scene.
302, the mobile terminal determines N game characters except the target game character within a preset distance range from the target position in the game scene.
303, the mobile terminal determines a first sound effect processing algorithm corresponding to a first game role based on the game state of the first game role, wherein the first game role is any one of the N game roles.
304, the mobile terminal obtains the audio of the first game role in a preset time period in advance.
In the embodiment of the application, the mobile terminal obtains the audio of the first game character for a preset time period in advance. The mobile terminal may obtain this audio from the audio file of the game application. The preset time period is a future period that starts after the current time; it may be preset and stored in a memory (e.g., a non-volatile memory) of the mobile terminal. For example, the preset time period may be a 30-second period starting 5 seconds after the current time.
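Step 304 can be pictured as reading a future slice of the game application's audio file. In the Kotlin sketch below, decodeAudioSlice is a placeholder rather than a real game-engine or Android API, and the 5-second lead time and 30-second window simply reuse the example values above.

```kotlin
// Hypothetical decoded audio clip.
data class AudioClip(val samples: FloatArray, val sampleRateHz: Int)

// Placeholder stub for reading a slice of the game application's audio file;
// a real implementation would decode the file via the platform's audio APIs.
fun decodeAudioSlice(filePath: String, startMs: Long, durationMs: Long): AudioClip =
    AudioClip(FloatArray((durationMs * 44.1).toInt()), 44100)

// Step 304: fetch the audio for the preset time period ahead of playback.
fun prefetchAudio(filePath: String, nowMs: Long): AudioClip {
    val leadTimeMs = 5_000L          // start of the preset time period (example: 5 s ahead)
    val windowMs = 30_000L           // length of the preset time period (example: 30 s)
    return decodeAudioSlice(filePath, startMs = nowMs + leadTimeMs, durationMs = windowMs)
}
```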
305, the mobile terminal processes the audio of the first game role according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game role.
The specific implementation of steps 301 to 303 in the embodiment of the present application may refer to steps 101 to 103 shown in fig. 1, and the specific implementation of step 305 in the embodiment of the present application may refer to step 104 shown in fig. 1, which is not described herein again.
And 306, the mobile terminal outputs the first sound effect within a preset time period.
Because the audio of the first game character in the audio file of the game application corresponds to the game picture, when the first game character starts to make a sound in the game picture, its audio in the audio file is played accordingly. Since sound effect processing takes time, the audio of the first game character can be extracted in advance and processed to obtain the first sound effect corresponding to the first game character. The mobile terminal then outputs the first sound effect within the preset time period, so that the first sound effect is played exactly when the first game character starts to make a sound in the subsequent game picture, keeping the game picture and the sound effect synchronized.
The mobile terminal may include at least two speakers, and the first sound effect may be output through the at least two speakers within a preset time period, so that the first sound effect may generate a reverberation effect.
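Putting steps 305 and 306 together, a processed effect would be held until the start of the preset time period and then routed to at least two speakers. The sketch below is a simplification: the StereoFrame type, the channel gains, and the blocking Thread.sleep scheduling are assumptions, and a real implementation would use the platform's audio scheduling instead.

```kotlin
// Hypothetical two-channel frame for the at-least-two-speaker output.
data class StereoFrame(val left: Float, val right: Float)

// Route the processed effect to two speaker channels with assumed per-channel gains.
fun toTwoSpeakers(effect: FloatArray, leftGain: Float = 1.0f, rightGain: Float = 0.8f): List<StereoFrame> =
    effect.map { sample -> StereoFrame(sample * leftGain, sample * rightGain) }

// Step 306: wait until the start of the preset time period, then play through both speakers.
fun scheduleOutput(effect: FloatArray, startAtMs: Long, nowMs: Long, play: (List<StereoFrame>) -> Unit) {
    val waitMs = (startAtMs - nowMs).coerceAtLeast(0L)
    Thread.sleep(waitMs)                 // simplistic wait; a game loop would schedule this differently
    play(toTwoSpeakers(effect))          // output through the two speakers at the scheduled time
}
```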
In the embodiment of the application, the sound effect of the first game character relative to the target game character can be determined, the sound effects of the surrounding game characters can be presented from the perspective of the target game character as far as possible, and the sound effects of other game characters can be determined according to their game states, which enriches the sound effects of game characters. In addition, the audio of the first game character can be obtained in advance for sound effect processing, which keeps the subsequent game picture and sound effect synchronized and improves how well they match.
The above description has introduced the solution of the embodiment of the application mainly from the perspective of the method-side implementation. It is understood that, in order to implement the above functions, the mobile terminal includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the exemplary units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiment of the present application, the mobile terminal may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a sound effect processing device for a game character according to an embodiment of the present application. As shown in fig. 4, the sound-effect processing apparatus 400 for a game character comprises a first obtaining unit 401, a first determining unit 402, a second determining unit 403, and a sound-effect processing unit 404, wherein:
a first obtaining unit 401, configured to obtain a target position of a target game character in a game scene;
a first determining unit 402, configured to determine N game characters in a game scene, except for a target game character, within a preset distance range from a target position;
a second determining unit 403, configured to determine, based on a game state of a first game character, a first sound effect processing algorithm corresponding to the first game character, where the first game character is any one of the N game characters;
the sound effect processing unit 404 is configured to process the audio of the first game character according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game character.
Optionally, the second determining unit 403 determines, based on the game state of the first game character, a first sound effect processing algorithm corresponding to the first game character, specifically: acquiring a corresponding relation between a game state and a sound effect processing algorithm; and determining a first sound effect processing algorithm corresponding to the game state of the first game role according to the corresponding relation between the game state and the sound effect processing algorithm.
Optionally, the game state includes a blood volume value of the game character, and the second determining unit 403 obtains a corresponding relationship between the game state and the sound effect processing algorithm, specifically: acquiring a corresponding relation between the blood volume interval of the game role and a sound effect processing algorithm; obtaining a game role blood volume value of a first game role, and determining a first game role blood volume interval in which the game role blood volume value of the first game role falls;
the second determining unit 403 determines the first sound effect processing algorithm corresponding to the game state of the first game character according to the corresponding relationship between the game state and the sound effect processing algorithm, specifically: and determining a first sound effect processing algorithm corresponding to the first game character blood volume interval according to the corresponding relation between the game character blood volume interval and the sound effect processing algorithm.
Optionally, the game state includes a blood volume change rate of the game character, and the second determining unit 403 obtains a corresponding relationship between the game state and the sound effect processing algorithm, specifically: acquiring a corresponding relation between a blood volume change rate interval of the game role and a sound effect processing algorithm; obtaining the blood volume change rate of the game role of the first game role, and determining the blood volume change rate interval of the first game role in which the blood volume change rate of the game role of the first game role falls;
the second determining unit 403 determines the first sound effect processing algorithm corresponding to the game state of the first game character according to the corresponding relationship between the game state and the sound effect processing algorithm, specifically: and determining a first sound effect processing algorithm corresponding to the first game character blood volume change rate interval according to the corresponding relation between the game character blood volume change rate interval and the sound effect processing algorithm.
Optionally, the sound effect processing unit 404 processes the audio of the first game character according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game character, specifically: and processing the audio of the first game role according to the target position, the scene parameters of the game scene and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game role.
Optionally, the sound effect processing apparatus 400 for a game character may further include a second obtaining unit 405, where:
the second obtaining unit 405 is configured to obtain the audio of the first game character within a preset time period in advance before the audio processing unit 404 processes the audio of the first game character according to the target position and the first audio processing algorithm to obtain the first audio corresponding to the first game character.
Optionally, the sound effect processing apparatus 400 for game characters further comprises an output unit 406, wherein:
the output unit 406 is configured to output a first sound effect within a preset time period after the sound effect processing unit 404 processes the audio of the first game character according to the target position and the first sound effect processing algorithm to obtain a first sound effect corresponding to the first game character.
By implementing the sound effect processing device for a game character shown in fig. 4, the sound effect of the first game character relative to the target game character can be determined, the sound effects of the surrounding game characters can be presented from the perspective of the target game character as far as possible, and the sound effects of other game characters can be determined according to their game states, which enriches the sound effects of game characters.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the application. As shown in fig. 5, the mobile terminal 500 includes a processor 501 and a memory 502, and may further include a bus 503 through which the processor 501 and the memory 502 are connected to each other. The bus 503 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or one type of bus. The mobile terminal 500 may also include input and output devices 504, which may include a display screen, such as a liquid crystal display screen. The memory 502 is used to store one or more programs containing instructions; the processor 501 is configured to call the instructions stored in the memory 502 to perform some or all of the method steps described above with respect to figs. 1 to 3.
By implementing the mobile terminal shown in fig. 5, the sound effect of the first game character relative to the target game character can be determined, the sound effects of the surrounding game characters can be presented from the perspective of the target game character as far as possible, and the sound effects of other game characters can be determined according to their game states, which enriches the sound effects of game characters.
As shown in fig. 6, for convenience of description, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The mobile terminal may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example of the mobile terminal:
fig. 6 is a block diagram illustrating a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present disclosure. Referring to fig. 6, the handset includes: a Radio Frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a Wireless Fidelity (WiFi) module 970, a processor 980, and a power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 6 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 6:
RF circuitry 910 may be used for the reception and transmission of information. In general, RF circuit 910 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 performs various functional applications and data processing of the cellular phone by operating the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the mobile phone, and the like. Further, the memory 920 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 930 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a fingerprint recognition module 931 and other input devices 932. The fingerprint recognition module 931 can collect the user's fingerprint data. In addition to the fingerprint recognition module 931, the input unit 930 may include other input devices 932, which may include, but are not limited to, one or more of a touch screen, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 940 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 940 may include a Display screen 941, and optionally, the Display screen 941 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, pressure sensor, temperature sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor (also referred to as a light sensor) that can adjust the backlight brightness of the mobile phone according to the brightness of ambient light, and thus adjust the brightness of the display screen 941, and a proximity sensor that can turn off the display screen 941 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications (such as horizontal and vertical screen switching, magnetometer attitude calibration), vibration recognition related functions (such as pedometer and tapping) and the like for recognizing the attitude of the mobile phone, and other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like which can be configured for the mobile phone are not described herein again.
The audio circuit 960, the speaker 961, and the microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may transmit the electrical signal converted from the received audio data to the speaker 961, which converts it into a sound signal for playing; the microphone 962, on the other hand, converts collected sound signals into electrical signals, which the audio circuit 960 receives and converts into audio data. The audio data is processed by the processor 980 and then either sent via the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 6 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the handset, and can be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 980 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the mobile phone. Alternatively, processor 980 may include one or more processing units; preferably, the processor 980 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 980.
The handset also includes a power supply 990 (e.g., a battery) for supplying power to the various components, which may preferably be logically connected to the processor 980 via a power management system, such that the power management system may manage charging, discharging, and power consumption.
The mobile phone may further include a camera 9100, and the camera 9100 is used for shooting images and videos and transmitting the shot images and videos to the processor 980 for processing.
The mobile phone may also include a Bluetooth module and the like, which are not described in detail herein.
In the embodiments shown in fig. 1 to fig. 3, the method flow of each step can be implemented based on the structure of the mobile phone.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the game character sound effect processing methods described in the above method embodiments.
Embodiments of the present application also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps of any of the game character sound effect processing methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the associated hardware. The program may be stored in a computer-readable memory, which may include a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The embodiments of the present application have been described in detail above, and the principles and implementations of the present application are explained herein using specific examples, which are provided only to help understand the method and its core idea. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (8)

1. A sound effect processing method for a game character, characterized by comprising the following steps:
acquiring a target position of a target game character in a game scene;
determining N game characters other than the target game character within a preset distance range from the target position in the game scene, wherein the preset distance range is determined according to the game scene;
determining, based on a game state of a first game character, a first sound effect processing algorithm corresponding to the first game character, which comprises: acquiring a correspondence between game states and sound effect processing algorithms, and determining, according to the correspondence, the first sound effect processing algorithm corresponding to the game state of the first game character; wherein the first game character is any one of the N game characters, different game states correspond to different sound effect processing algorithms, and the game state is the real-time state of the first game character in the current game scene; and
processing audio of the first game character according to the target position, scene parameters of the game scene, and the first sound effect processing algorithm to obtain a first sound effect of the first game character relative to the target game character, wherein the scene parameters of the game scene comprise: an area of the game scene, a material of scenery in the game scene, and an air medium parameter of the game scene.
2. The method of claim 1, wherein the game state comprises a blood volume value of a game character, and acquiring the correspondence between game states and sound effect processing algorithms comprises:
acquiring a correspondence between blood volume intervals of game characters and sound effect processing algorithms; and
acquiring the blood volume value of the first game character, and determining a first blood volume interval into which the blood volume value of the first game character falls;
wherein determining, according to the correspondence between game states and sound effect processing algorithms, the first sound effect processing algorithm corresponding to the game state of the first game character comprises:
determining, according to the correspondence between blood volume intervals and sound effect processing algorithms, the first sound effect processing algorithm corresponding to the first blood volume interval.
3. The method of claim 1, wherein the game state comprises a blood volume change rate of a game character, and acquiring the correspondence between game states and sound effect processing algorithms comprises:
acquiring a correspondence between blood volume change rate intervals of game characters and sound effect processing algorithms; and
acquiring the blood volume change rate of the first game character, and determining a first blood volume change rate interval into which the blood volume change rate of the first game character falls;
wherein determining, according to the correspondence between game states and sound effect processing algorithms, the first sound effect processing algorithm corresponding to the game state of the first game character comprises:
determining, according to the correspondence between blood volume change rate intervals and sound effect processing algorithms, the first sound effect processing algorithm corresponding to the first blood volume change rate interval.
4. The method according to any one of claims 1 to 3, wherein before processing the audio of the first game character according to the target position, the scene parameters of the game scene, and the first sound effect processing algorithm to obtain the first sound effect of the first game character relative to the target game character, the method further comprises:
acquiring, in advance, the audio of the first game character within a preset time period.
5. The method of claim 4, wherein after processing the audio of the first game character according to the target position, the scene parameters of the game scene, and the first sound effect processing algorithm to obtain the first sound effect of the first game character relative to the target game character, the method further comprises:
outputting the first sound effect within the preset time period.
6. A sound effect processing apparatus for a game character, characterized by comprising:
a first acquiring unit, configured to acquire a target position of a target game character in a game scene;
a first determining unit, configured to determine N game characters other than the target game character within a preset distance range from the target position in the game scene, wherein the preset distance range is determined according to the game scene;
a second determining unit, configured to determine, based on a game state of a first game character, a first sound effect processing algorithm corresponding to the first game character, which comprises: acquiring a correspondence between game states and sound effect processing algorithms, and determining, according to the correspondence, the first sound effect processing algorithm corresponding to the game state of the first game character; wherein the first game character is any one of the N game characters, different game states correspond to different sound effect processing algorithms, and the game state is the real-time state of the first game character in the current game scene; and
a sound effect processing unit, configured to process audio of the first game character according to the target position, scene parameters of the game scene, and the first sound effect processing algorithm to obtain a first sound effect of the first game character relative to the target game character, wherein the scene parameters of the game scene comprise: an area of the game scene, a material of scenery in the game scene, and an air medium parameter of the game scene.
7. A mobile terminal, comprising a processor and a memory, wherein the memory is configured to store one or more programs configured to be executed by the processor, and the programs comprise instructions for performing the method according to any one of claims 1 to 5.
8. A computer-readable storage medium, configured to store a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1 to 5.
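Claim 1 combines three operations: filtering the game characters within a preset distance range of the target position, selecting a sound effect processing algorithm from each such character's game state, and processing that character's audio with the target position and the scene parameters. The following is a minimal Kotlin sketch of that flow, assuming hypothetical types (Position, GameState, GameCharacter, SceneParams) and modeling a "sound effect processing algorithm" as a plain function; it is an illustration of the claimed steps under those assumptions, not an implementation taken from the specification.

```kotlin
import kotlin.math.sqrt

// Hypothetical, simplified stand-ins for the entities named in claim 1.
data class Position(val x: Float, val y: Float, val z: Float)
data class GameState(val bloodVolume: Int, val bloodVolumeChangeRate: Float)
data class GameCharacter(val id: Int, val position: Position, val state: GameState)
data class SceneParams(val area: Float, val sceneryMaterial: String, val airMediumParameter: Float)

// A "sound effect processing algorithm" is modeled as a function that maps raw audio
// samples plus the source position, listener (target) position, and scene parameters
// to processed samples.
typealias SoundEffectAlgorithm =
    (samples: FloatArray, source: Position, listener: Position, scene: SceneParams) -> FloatArray

fun distance(a: Position, b: Position): Float {
    val dx = a.x - b.x
    val dy = a.y - b.y
    val dz = a.z - b.z
    return sqrt(dx * dx + dy * dy + dz * dz)
}

// Claim 1 step by step: keep the characters within the preset range of the target
// position (excluding the target itself), pick an algorithm per character from its
// game state, and process that character's audio with the target position and the
// scene parameters.
fun processNearbyCharacterAudio(
    target: GameCharacter,
    allCharacters: List<GameCharacter>,
    presetRange: Float,                        // assumed to be derived from the game scene
    audioOf: (GameCharacter) -> FloatArray,    // raw audio per character
    algorithmFor: (GameState) -> SoundEffectAlgorithm,
    scene: SceneParams
): Map<Int, FloatArray> =
    allCharacters
        .filter { it.id != target.id && distance(it.position, target.position) <= presetRange }
        .associate { character ->
            val algorithm = algorithmFor(character.state)
            character.id to algorithm(audioOf(character), character.position, target.position, scene)
        }
```

In this sketch the preset range is passed in directly, whereas claim 1 derives it from the game scene; the returned map simply associates each nearby character's id with its processed first sound effect.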
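Claims 2 and 3 both rely on a correspondence keyed by intervals: blood volume intervals in claim 2 and blood volume change rate intervals in claim 3. The sketch below, again with hypothetical interval boundaries and with an "algorithm" reduced to a gain/reverb parameter pair for illustration, shows one way such a correspondence and the interval lookup could be expressed.

```kotlin
// An algorithm is reduced to a simple parameter pair purely for illustration.
data class EffectParams(val gain: Float, val reverbMix: Float)

// Correspondence between blood volume intervals and processing parameters (claim 2),
// on an assumed 0-100 blood volume scale.
val bloodVolumeTable: List<Pair<IntRange, EffectParams>> = listOf(
    0..20 to EffectParams(gain = 1.4f, reverbMix = 0.6f),    // critically low: heavier effect
    21..60 to EffectParams(gain = 1.1f, reverbMix = 0.3f),
    61..100 to EffectParams(gain = 1.0f, reverbMix = 0.1f)
)

// Correspondence between blood volume change rate intervals and processing parameters
// (claim 3); negative rates mean the character is losing blood volume.
val changeRateTable: List<Pair<ClosedFloatingPointRange<Float>, EffectParams>> = listOf(
    (-100f..-10f) to EffectParams(gain = 1.3f, reverbMix = 0.5f),  // taking heavy damage
    (-10f..0f) to EffectParams(gain = 1.1f, reverbMix = 0.2f),
    (0f..100f) to EffectParams(gain = 1.0f, reverbMix = 0.1f)
)

// Determine the interval into which the current value falls and return its parameters.
fun paramsForBloodVolume(bloodVolume: Int): EffectParams =
    bloodVolumeTable.first { bloodVolume in it.first }.second

fun paramsForChangeRate(rate: Float): EffectParams =
    changeRateTable.first { rate in it.first }.second

fun main() {
    println(paramsForBloodVolume(15))    // EffectParams(gain=1.4, reverbMix=0.6)
    println(paramsForChangeRate(-25f))   // EffectParams(gain=1.3, reverbMix=0.5)
}
```

The interval boundaries and parameter values here are placeholders; in practice they would come from whatever correspondence the game defines between game states and sound effect processing algorithms.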
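Claims 4 and 5 add a timing constraint: the audio of the first game character is acquired in advance for a preset time period, and the resulting first sound effect is output within that period. A small scheduler sketch, with a hypothetical AudioClip type, illustrates the prefetch-then-output pattern.

```kotlin
// Hypothetical clip type: raw samples plus when the clip is scheduled to start and how long it lasts.
data class AudioClip(val samples: FloatArray, val startMs: Long, val durationMs: Long)

class SoundEffectScheduler(private val presetPeriodMs: Long) {
    private val pending = ArrayDeque<AudioClip>()

    // Acquire the raw audio for an upcoming period ahead of time (claim 4).
    fun prefetch(clip: AudioClip) {
        require(clip.durationMs <= presetPeriodMs) { "clip must fit within the preset period" }
        pending.addLast(clip)
    }

    // Process and output every buffered clip that falls within the current preset
    // period (claim 5). `process` stands in for the selected sound effect processing
    // algorithm and `play` for the audio output path.
    fun outputWithinPeriod(nowMs: Long, process: (FloatArray) -> FloatArray, play: (FloatArray) -> Unit) {
        while (pending.isNotEmpty() && pending.first().startMs < nowMs + presetPeriodMs) {
            play(process(pending.removeFirst().samples))
        }
    }
}
```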
CN201811315169.0A 2018-11-06 2018-11-06 Game role sound effect processing method and device, mobile terminal and storage medium Active CN109529335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811315169.0A CN109529335B (en) 2018-11-06 2018-11-06 Game role sound effect processing method and device, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811315169.0A CN109529335B (en) 2018-11-06 2018-11-06 Game role sound effect processing method and device, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN109529335A (en) 2019-03-29
CN109529335B (en) 2022-05-20

Family

ID=65846088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811315169.0A Active CN109529335B (en) 2018-11-06 2018-11-06 Game role sound effect processing method and device, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN109529335B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119547B (en) * 2019-04-28 2021-07-30 腾讯科技(深圳)有限公司 Method, device and control equipment for predicting group war victory or defeat
CN110371051B (en) * 2019-07-22 2021-06-04 广州小鹏汽车科技有限公司 Prompt tone playing method and device for vehicle-mounted entertainment
CN117579979B (en) * 2024-01-15 2024-04-19 深圳瑞利声学技术股份有限公司 Game panoramic sound generation method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001009157A (en) * 1999-06-30 2001-01-16 Konami Co Ltd Control method for video game, video game device and medium recording program of video game allowing reading by computer
US10821361B2 (en) * 2016-11-03 2020-11-03 Bragi GmbH Gaming with earpiece 3D audio
CN107179908B (en) * 2017-05-16 2020-07-07 网易(杭州)网络有限公司 Sound effect adjusting method and device, electronic equipment and computer readable storage medium
CN108465241B (en) * 2018-02-12 2021-05-04 网易(杭州)网络有限公司 Game sound reverberation processing method and device, storage medium and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104841131A (en) * 2014-02-18 2015-08-19 腾讯科技(深圳)有限公司 Audio frequency control method and apparatus
CN107115672A (en) * 2016-02-24 2017-09-01 网易(杭州)网络有限公司 Gaming audio resource player method, device and games system
CN108597530A (en) * 2018-02-09 2018-09-28 腾讯科技(深圳)有限公司 Sound reproducing method and device, storage medium and electronic device
CN108355356A (en) * 2018-03-14 2018-08-03 网易(杭州)网络有限公司 Scene of game sound intermediate frequency control method for playing back and device
CN108536419A (en) * 2018-03-28 2018-09-14 努比亚技术有限公司 A kind of game volume control method, equipment and computer readable storage medium
CN108579084A (en) * 2018-04-27 2018-09-28 腾讯科技(深圳)有限公司 Method for information display, device, equipment in virtual environment and storage medium

Also Published As

Publication number Publication date
CN109529335A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN111773696B (en) Virtual object display method, related device and storage medium
CN107341006B (en) Screen locking wallpaper recommendation method and related products
CN111182355B (en) Interaction method, special effect display method and related device
CN110141859B (en) Virtual object control method, device, terminal and storage medium
CN107707768B (en) Processing method for running game application and related product
US10235125B2 (en) Audio playback control method, and terminal device
CN107197146B (en) Image processing method and device, mobile terminal and computer readable storage medium
CN107967129B (en) Display control method and related product
CN109550248B (en) Virtual object position identification method and device, mobile terminal and storage medium
CN109529335B (en) Game role sound effect processing method and device, mobile terminal and storage medium
CN110633067B (en) Sound effect parameter adjusting method and mobile terminal
CN108646973B (en) Off-screen display method, mobile terminal and computer-readable storage medium
CN110245601B (en) Eyeball tracking method and related product
CN108595000B (en) Screen brightness adjusting method and device
CN113440840B (en) Interaction method and related device
US20160066119A1 (en) Sound effect processing method and device thereof
CN109126120B (en) Motor control method and related product
CN108108137B (en) Display control method and related product
CN110955510B (en) Isolation processing method and related device
CN108600887B (en) Touch control method based on wireless earphone and related product
CN107754316B (en) Information exchange processing method and mobile terminal
CN107562303B (en) Method and device for controlling element motion in display interface
CN106506834B (en) Method, terminal and system for adding background sound in call
CN109587552B (en) Video character sound effect processing method and device, mobile terminal and storage medium
CN111158624A (en) Application sharing method, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant