CN112612445A - Audio playing method and device - Google Patents

Audio playing method and device

Info

Publication number: CN112612445A
Application number: CN202011589470.8A
Authority: CN (China)
Prior art keywords: virtual, information, user, scene, sound
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 孙丹青
Current Assignee: Vivo Mobile Communication Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/165 - Management of the audio stream, e.g. setting of volume, audio stream path
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 - Details of transducers, loudspeakers or microphones
    • H04R 1/10 - Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R 1/1091 - Details not provided for in groups H04R 1/1008 - H04R 1/1083
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 - Circuits for transducers, loudspeakers or microphones
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2430/00 - Signal processing covered by H04R, not provided for in its groups
    • H04R 2430/01 - Aspects of volume control, not necessarily automatic, in sound systems


Abstract

The application discloses an audio playing method and device, belonging to the field of computer technology. The method comprises the following steps: receiving movement state information of a first user, sent by an earphone and measured by a displacement sensor of the earphone; determining first virtual position information of a first virtual character in a virtual scene according to the movement state information and the current position information, in the virtual scene, of the first virtual character corresponding to the first user; receiving second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene; acquiring the sound to be emitted by the second virtual character in the virtual scene; and adjusting the playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information. The method and device monitor the user's movement state with the displacement sensor in the earphone and use the relative positions of the virtual characters corresponding to each user in the virtual scene to adjust how sound is played in the earphone, achieving an auditory immersive experience.

Description

Audio playing method and device
Technical Field
The application belongs to the technical field of computers, and particularly relates to an audio playing method and device.
Background
With the development of virtual reality technology, virtual scenes simulated by virtual equipment bring more and more vivid immersive experience to users.
Currently, multi-user virtual scenes have emerged to enhance users' visual immersive experience. By wearing a virtual device, a user can see the surrounding virtual scene and the virtual characters corresponding to other users in the same virtual scene. The virtual characters corresponding to other users can come closer or move away as those users turn their heads, walk, and so on. In addition, the user can hear preset virtual sounds through the earphone.
In the process of implementing the present application, the inventor finds that at least the following problems exist in the prior art:
Although the above approach gives the user a visual immersive experience, the virtual sounds of the virtual characters corresponding to other users that the user hears through the earphone are preset; even if the user moves around in the real environment, the sounds in the virtual environment are still played in the preset manner, so an auditory immersive experience is not well achieved.
Disclosure of Invention
An object of the embodiments of the application is to provide an audio playing method and an audio playing apparatus, which can solve the problem in the prior art that, in a multi-user virtual scene, the sound effects produced in the earphone do not change as the users' positions change relative to one another.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an audio playing method, where the method includes:
receiving the movement state information of the first user sent by the earphone; the earphone is provided with a displacement sensor, and the movement state information is measured by the displacement sensor;
determining first virtual position information of a first virtual character in a virtual scene according to the movement state information and the current position information, in the virtual scene, of the first virtual character corresponding to the first user;
receiving second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene;
acquiring sound to be emitted by the second virtual character in the virtual scene;
and adjusting the playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information.
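Purely for illustration, the five steps of the first aspect can be read as the following minimal pipeline sketch. Every name in it (receive_movement_state, adjust_playback, and so on) is an assumption invented here; the patent does not prescribe any concrete API, and the actual computations are filled in by the detailed description below.

```python
# Illustrative sketch only: stand-in functions for the five steps of the
# first aspect. All names and return values are invented assumptions.

def receive_movement_state():
    # Step 1: movement state information measured by the earphone's
    # displacement sensor and sent to the mobile terminal.
    return {"forward_m": 1.0, "yaw_deg": 30.0}

def update_first_virtual_position(current_xy, movement):
    # Step 2: combine the movement with the first virtual character's
    # current position to get its first virtual position information.
    return (current_xy[0], current_xy[1] + movement["forward_m"])

def receive_second_virtual_positions():
    # Step 3: second virtual position information reported for each
    # second user's virtual character.
    return {"second_avatar": (4.0, 0.0)}

def pending_sound_of(avatar_id):
    # Step 4: the sound the second virtual character is about to emit.
    return f"{avatar_id}_speech.wav"

def adjust_playback(first_pos, second_pos, sound):
    # Step 5: adjust how the sound is played in the earphone according
    # to the two virtual positions (volume, left/right channels, ...).
    print(f"play {sound}: source at {second_pos}, listener at {first_pos}")

first_pos = update_first_virtual_position((0.0, 0.0), receive_movement_state())
for avatar_id, second_pos in receive_second_virtual_positions().items():
    adjust_playback(first_pos, second_pos, pending_sound_of(avatar_id))
```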
In a second aspect, an embodiment of the present application provides an audio playing apparatus, including:
a first receiving module, configured to receive the movement state information of the first user sent by the earphone; the earphone is provided with a displacement sensor, and the movement state information is measured by the displacement sensor;
a determining module, configured to determine first virtual position information of a first virtual character in a virtual scene according to the movement state information and the current position information, in the virtual scene, of the first virtual character corresponding to the first user;
a second receiving module, configured to receive second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene;
an acquisition module, configured to acquire the sound to be emitted by the second virtual character in the virtual scene;
and an adjusting module, configured to adjust the playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the audio playing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, and when executed by a processor, the program or instructions implement the steps of the audio playing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement the audio playing method according to the first aspect.
According to the embodiments of the invention, an earphone provided with a displacement sensor is used, so that in a multi-user virtual scene the movement state information of the first user can be measured by the displacement sensor. After the mobile terminal receives the movement state information of the first user sent by the earphone, it can determine the first virtual position information of the first virtual character in the virtual scene according to the movement state information and the current position information, in the virtual scene, of the first virtual character corresponding to the first user. The mobile terminal also receives second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene, and after acquiring the sound to be emitted by the second virtual character in the virtual scene, it can adjust the playing mode of that sound in the earphone according to the first virtual position information and the second virtual position information. In the invention, the sound of the virtual characters corresponding to other users, as emitted from the user's earphone, changes with the relative positions of the user's own virtual character and each other user's virtual character, so that the user hears the sounds emitted by other users' virtual characters in the virtual scene as if present in person, which enhances the user's auditory immersive experience in a multi-user virtual scene.
Drawings
FIG. 1 is a flow chart of an audio playing method of the present invention;
FIG. 2 is a flow chart of another audio playback method of the present invention;
FIG. 3 is a schematic diagram of a position relationship between virtual characters in a multi-person virtual scene according to the present invention;
FIG. 4 is a block diagram of an audio playing apparatus according to the present invention;
FIG. 5 is a block diagram of an electronic device of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and not necessarily to describe a particular order or sequence. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one kind, and their number is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The following describes in detail the audio playing method provided by the embodiment of the present application through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of an audio playing method of the present invention is shown. The method comprises the following steps:
step 101: receiving the movement state information of the first user sent by the earphone; the earphone is provided with a displacement sensor, and the movement state information is measured by the displacement sensor;
the scene applied by the embodiment of the invention is a multi-person virtual AR scene. After each user wears the AR eyeshade and the earphones, the mobile terminal is erected on the AR eyeshade, and light in a screen of the mobile terminal is projected into the eyes of the user virtually through lenses of the AR eyeshade so that the user can see virtual scene pictures. The earphone is connected with the mobile terminal, and sound effect is output by utilizing the two earphones, so that a user can hear the sound of the virtual scene. And the virtual roles corresponding to the users are in the same virtual scene. The virtual scene is a game scene, a virtual mall scene and the like, and the real user trades commodity objects in the virtual mall by virtual characters.
In the embodiments of the invention, to ensure that the user hears the changing sound effects in the virtual scene immersively, both earpieces of the provided earphone are fitted with displacement sensors. The displacement sensors may be gyroscopes, and the earphone may be a Bluetooth earphone.
It should be noted that when a user has just put on the virtual device and entered the virtual scene, the virtual device performs an initial positioning of the user's position in the real scene and of the position, in the virtual scene, of the virtual character corresponding to the user. When the user's posture subsequently changes, the user's movement state information in the real scene and the corresponding virtual character's virtual position information in the virtual scene are determined relative to this initial positioning.
The movement state information refers to any posture-change information of a user. It may be walking information of the user, such as the user walking around; head rotation information of the first user, such as the user turning their head; or both walking information and head rotation information, such as the user turning their head while walking.
Specifically, the displacement sensor in the earphone monitors the movement state information of the first user in real time; when the user walks, rotates their head, or rotates their head while walking in the real scene, the earphone sends the movement state information measured by the displacement sensor to the mobile terminal.
Thus, for the first user, the earphone worn by the first user sends the first user's movement state information to the mobile terminal used by the first user.
Step 102: determining first virtual position information of a first virtual character in a virtual scene according to the movement state information and the current position information, in the virtual scene, of the first virtual character corresponding to the first user;
In the embodiments of the invention, after the virtual AR scene has been constructed, if this is the first time movement state information has been generated for the first user, the current position information is the initialized position information, in the virtual scene, of the first virtual character corresponding to the first user. If it is not the first time, the current position information is the virtual position information of the first virtual character after its previous posture change in the virtual scene.
Those skilled in the art can preset a correspondence table between movement state information in the real scene and movement state information in the virtual scene according to actual requirements. For example, it may be set that if the user moves three steps forward in the real scene, the virtual character moves thirty meters forward in the virtual scene, and that if the user's head rotates 30 degrees from north toward east in the real scene, the virtual character's head in the virtual scene also rotates 30 degrees from north toward east.
Specifically, after the first user generates movement state information in the real scene, the earphone sends the movement state information to the mobile terminal. The mobile terminal determines the corresponding movement state information of the first virtual character in the virtual scene by querying the correspondence table, and then determines the first virtual position information of the first virtual character in the virtual scene based on the current position information of the first virtual character corresponding to the first user. The first virtual position information is the position information of the first virtual character after its posture change in the virtual scene.
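As a concrete illustration of this lookup, the sketch below hard-codes the two correspondences given as examples above (three real steps to thirty virtual meters, head rotation mapped one to one) and applies the mapped delta to the avatar's current virtual pose. The scale constants and function names are assumptions, not values prescribed by the patent.

```python
import math

# Assumed scale factors taken from the examples in the text:
# 3 real steps (roughly 3 m) -> 30 virtual meters; rotations map 1:1.
REAL_TO_VIRTUAL_DISTANCE_SCALE = 10.0
REAL_TO_VIRTUAL_ANGLE_SCALE = 1.0

def to_virtual_delta(real_forward_m: float, real_yaw_deg: float):
    """Look up / scale a real-scene movement into a virtual-scene movement."""
    return (real_forward_m * REAL_TO_VIRTUAL_DISTANCE_SCALE,
            real_yaw_deg * REAL_TO_VIRTUAL_ANGLE_SCALE)

def apply_movement(current_xy, current_heading_deg, real_forward_m, real_yaw_deg):
    """Add the mapped delta to the current pose to get the first virtual
    position information (step 102)."""
    dist, dyaw = to_virtual_delta(real_forward_m, real_yaw_deg)
    heading = (current_heading_deg + dyaw) % 360.0
    x, y = current_xy
    x += dist * math.sin(math.radians(heading))  # east component
    y += dist * math.cos(math.radians(heading))  # north component
    return (x, y), heading

# Three real steps straight ahead while facing north:
print(apply_movement((0.0, 0.0), 0.0, 3.0, 0.0))  # ((0.0, 30.0), 0.0)
```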
Step 103: receiving second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene;
In the embodiments of the invention, every user in the same virtual scene can determine the position information of their corresponding virtual character in the virtual scene using steps 101-102. For example, an earphone worn by a second user sends the second user's movement state information to the mobile terminal used by the second user, and that mobile terminal determines the second virtual position information of the second virtual character in the virtual scene according to the movement state information and the current position information, in the virtual scene, of the second virtual character corresponding to the second user.
It can be understood that the earphones worn by the second users (the users other than the first user) also monitor their wearers' movement state information in the real scene in real time. Once a posture change is detected, the virtual position information of each corresponding virtual character in the virtual scene is determined using steps 101-102 and sent to the first user's mobile terminal.
Step 104: acquiring sound to be emitted by the second virtual character in the virtual scene;
Because there are multiple users in the same virtual scene and the users may interact, each user may produce sound in the virtual scene while moving and speaking. For example, when multiple users cooperate to attack an enemy in a game scene, users on the same team communicate strategy and tactics with one another, so every user may produce sound in the scene. In the embodiments of the application, for a given first user, the sounds to be emitted in the virtual scene by the second virtual characters corresponding to each of the other users are acquired.
Step 105: and adjusting the playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information.
In the embodiments of the invention, adjusting the playing mode of the sound in the earphone may mean adjusting the volume output by the mobile terminal, or adjusting the volumes of the earphone's left and right channels separately.
To ensure that the first user hears the sounds emitted by other users' virtual characters in the virtual scene as if present in person, after the other users' mobile terminals send the second virtual position information of their corresponding second virtual characters in the virtual scene to the first user's mobile terminal, the first user's mobile terminal can acquire the sounds to be emitted by the second virtual characters in the virtual scene. It can then flexibly adjust, according to the relative positions of the first virtual character and each second virtual character in the virtual scene, the playing mode in the first user's earphone of the sounds emitted by the second virtual characters.
It should be noted that the embodiments of the application do not limit which user is the first user or which user is the second user. For any given user, that user is the first user and all other users are second users.
According to the embodiments of the invention, an earphone provided with a displacement sensor is used, so that in a multi-user virtual scene the movement state information of the first user can be measured by the displacement sensor. After the mobile terminal receives the movement state information of the first user sent by the earphone, it can determine the first virtual position information of the first virtual character in the virtual scene according to the movement state information and the current position information, in the virtual scene, of the first virtual character corresponding to the first user. The mobile terminal also receives second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene, and after acquiring the sound to be emitted by the second virtual character in the virtual scene, it can adjust the playing mode of that sound in the earphone according to the first virtual position information and the second virtual position information. In the invention, the sound of the virtual characters corresponding to other users, as emitted from the user's earphone, changes with the relative positions of the user's own virtual character and each other user's virtual character, so that the user hears the sounds emitted by other users' virtual characters in the virtual scene as if present in person, which enhances the user's auditory immersive experience in a multi-user virtual scene.
Referring to fig. 2, a flow chart of another audio playing method of the present invention is shown. The method comprises the following steps:
step 201: receiving first deflection angle information and first position coordinate information of a first user, which are sent by an earphone; the earphone is provided with a displacement sensor, and the movement state information is measured by the displacement sensor;
specifically, the first deflection angle information includes: first deflection angle size information and first deflection angle direction information of each of the left and right ears.
When the head of a user rotates, displacement sensors built in left and right earphones of the earphone can respectively measure the deflection angle and the deflection angle direction of the left and right ears of the user. For example, when the head of the user rotates 90 ° from north to east, the displacement sensor of the headset of the right ear of the user measures that the deflection angle information of the right ear of the user is 90 °, and the deflection angle direction information is from east to south. And the displacement sensor of the headset of the left ear of the user measures that the deflection angle information of the left ear of the user is 90 degrees, and the deflection angle direction information is rotated from northwest to north. At this time, the information of the deflection angle direction and the information of the deflection angle size of the left ear and the right ear both belong to the first deflection angle information. Moreover, since the user does not move, the first position coordinate information is the initial standing position information of the user in the real scene or the standing position information after the previous movement, which depends on the situation of the user.
When the user moves, displacement sensors arranged in left and right earphones of the earphone can measure displacement movement information of the user. Because the head of the user does not rotate in the moving process, the displacement moving information measured by the left ear is the same as the displacement moving information measured by the right ear. In this case, when the first position coordinate information is transmitted, the displacement movement information measured by any one of the ears may be selected. The first deflection angle information of the user is initial deflection angle information of two ears of the user in a real scene, or deflection angle information of two ears after head rotation occurs last time.
When the head of the user rotates in the moving process, the left headset and the right headset of the earphone can measure the deflection angle information and the position coordinate information of the two ears respectively. At this time, the first position coordinate information selects displacement movement information measured by any one of the displacement sensors, and the first deflection angle information includes: the left ear deflection angle information, the left ear deflection angle direction information, the right ear deflection angle information and the right ear deflection angle direction information.
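The three cases above (rotation only, walking only, walking plus rotation) can all be carried by one message from the earphone to the mobile terminal, for example as follows. The field and class names are illustrative assumptions; the patent does not define a message format.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EarDeflection:
    angle_deg: float   # deflection angle magnitude, e.g. 90.0
    direction: str     # deflection direction, e.g. "east->south"

@dataclass
class MovementStateMessage:
    # First position coordinate information (standing position, or the
    # displacement measured by either ear's sensor when walking).
    position_xy: Tuple[float, float]
    # None when the head did not rotate (walking-only case).
    left_ear: Optional[EarDeflection] = None
    right_ear: Optional[EarDeflection] = None

# Head rotated 90 degrees from facing north to facing east, no walking:
msg = MovementStateMessage(
    position_xy=(0.0, 0.0),
    left_ear=EarDeflection(90.0, "west->north"),
    right_ear=EarDeflection(90.0, "east->south"),
)
print(msg)
```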
In the embodiments of the invention, to improve the accuracy of the measured first deflection angle information and first position coordinate information of the first user, the measurements can also be combined with a gyroscope in the AR goggles. Specifically: the first deflection angle information and first position information of the first user are obtained from the displacement sensors built into the earphone, and are calibrated against the gyroscope built into the AR goggles to obtain calibrated first deflection angle information and first position coordinate information. The earphone then sends the calibrated first deflection angle information and first position coordinate information to the first user's mobile terminal.
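The text only states that the earphone readings are calibrated against the goggles' gyroscope, without saying how. Purely as an assumption, a minimal calibration could blend the two yaw measurements with a fixed weight:

```python
# Assumed calibration scheme, not specified by the patent: blend the
# earphone's yaw reading with the AR goggles' (often steadier) reading.

def calibrate_yaw(earphone_yaw_deg: float, goggles_yaw_deg: float,
                  goggles_weight: float = 0.7) -> float:
    return ((1.0 - goggles_weight) * earphone_yaw_deg
            + goggles_weight * goggles_yaw_deg)

print(calibrate_yaw(92.0, 90.0))  # 90.6, pulled toward the goggles' value
```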
It should be noted that step 201 may be regarded as a refinement of step 101.
Step 202: determining first virtual position coordinate information and first virtual deflection angle information of a first virtual character in a virtual scene according to the first deflection angle information, the first position coordinate information and current position information of the first virtual character corresponding to the first user in the virtual scene;
In the embodiments of the invention, the current position information includes: current deflection angle information and current position coordinate information. The current deflection angle information is the current deflection angle magnitude information and current deflection direction information of the left and right ears of the first virtual character in the virtual scene. Likewise, the first virtual deflection angle information includes: the first virtual deflection angle magnitude information and first virtual deflection direction information of each of the left and right ears.
Specifically, with reference to the movement-state correspondence table between the real scene and the virtual scene, the first virtual deflection angle magnitude information of each ear is determined from that ear's first deflection angle magnitude information and its current deflection angle magnitude information in the virtual scene; the first virtual deflection direction information of each ear is determined from that ear's first deflection direction information and its current deflection direction information in the virtual scene; and the first virtual position coordinate information of the first virtual character is determined from the first user's first position coordinate information and the first virtual character's current position coordinate information. Briefly, step 202 determines where the first user's first virtual character is located in the virtual scene, and the orientation of each of its two ears, after the first virtual character's pose has changed correspondingly in the virtual scene.
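The per-ear orientation update described here amounts to adding the measured deflection to each ear's current virtual bearing (after the real-to-virtual correspondence table has been applied). A minimal sketch, with compass bearings in degrees and all names assumed for illustration:

```python
def update_ear_bearing(current_bearing_deg: float, deflection_deg: float,
                       clockwise: bool = True) -> float:
    """New virtual bearing of one ear after the measured deflection."""
    delta = deflection_deg if clockwise else -deflection_deg
    return (current_bearing_deg + delta) % 360.0

# Head turns 90 degrees clockwise (north -> east): the right ear swings
# from east (90) to south (180), the left ear from west (270) to north (0).
assert update_ear_bearing(90.0, 90.0) == 180.0
assert update_ear_bearing(270.0, 90.0) == 0.0
```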
It should be noted that step 202 may be regarded as a refinement of step 102.
Step 203: receiving second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene;
The second virtual position information is the virtual position coordinate information of a second virtual character in the virtual scene. The mobile terminals of the second users (the users other than the first user) send the virtual position coordinate information of their corresponding second virtual characters in the virtual scene to the first user's mobile terminal.
The second virtual position information is sent by the mobile terminal used by the second user; it is obtained on the same principle as in steps 201 and 202.
Step 204: acquiring sound to be emitted by the second virtual character in the virtual scene;
when a game is played in a multi-player virtual VR scene, users are often required to communicate game tactics through voice communication, and the second virtual character also generates sound when walking, so that the speaking sound of the users and the walking sound of the virtual character, or the sound generated by the virtual character touching an object in the virtual scene and operating equipment can belong to the sound generated by the second virtual character in the virtual scene.
Optionally, the step 204 may include acquiring a speaking voice sent by the second electronic device; the speaking voice is collected by a microphone of the second electronic equipment; and taking the speaking voice as the voice to be sent out by the second virtual character in the virtual scene.
In order to ensure that the first user can hear the speaking sound of other users in the virtual scene in person, the microphone of each user mobile terminal collects the speaking sound of each user, and sends the collected speaking sound to the mobile terminal of the first user as the sound to be emitted by the corresponding second virtual character in the virtual scene, so that the mobile terminal of the first user can adjust the volume of the speaking sound emitted by other users in the earphone of the first user according to the relative position relationship between the first virtual character and other second virtual characters in the virtual scene. Therefore, the sound emitted by other users in reality can be played to the first user according to the position of the first user in the virtual scene, so that the first user can hear the real speaking sound of other users more personally on the scene. In the embodiment of the present invention, after the microphone of the second electronic device collects the speaking voice of the second user, the second electronic device may obtain the character type of the second virtual character corresponding to the second user, adjust the collected speaking voice to the virtual character voice corresponding to the character type of the second virtual character according to the character type, and then send the virtual character voice to the first electronic device of the first user by the second electronic device. For example, if the type of the character corresponding to the second user is female, the collected speaking voice is adjusted to the tone of female.
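As a sketch of this re-pitching step, the snippet below shifts the captured speech by a per-character-type semitone offset using librosa's pitch shifter. The mapping from character type to offset, and the idea that a simple pitch shift is sufficient, are assumptions made here for illustration; the patent only says the speech is adjusted to the character's voice.

```python
import librosa

# Assumed mapping from character type to a semitone shift.
CHARACTER_PITCH_STEPS = {"female": 4, "male": -4, "child": 7}

def to_character_voice(wav_path: str, character_type: str):
    """Re-pitch captured speech toward the avatar's character type."""
    y, sr = librosa.load(wav_path, sr=None)
    steps = CHARACTER_PITCH_STEPS.get(character_type, 0)
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=steps), sr
```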
Optionally, step 204 may include: obtaining the clothing information of the second virtual character; and obtaining, from a preset sound database according to the clothing information, the walking sound to be emitted by the second virtual character.
A sound database is preset in the mobile terminal, storing a correspondence table between clothing information and walking sounds. For example, a virtual character may make a "click" walking sound when wearing high-heeled shoes and no walking sound at all when wearing athletic shoes. Those skilled in the art can also set correspondences between walking sounds and the ground material the virtual character steps on according to actual requirements; for example, high heels on a wooden floor can be set to sound different from high heels on stone. The invention is not limited in this regard. In this way, the sound emitted by a virtual character better matches the character's clothing.
Step 205: determining the virtual direction and the virtual position of the second virtual character relative to the first virtual character according to the first virtual position coordinate information, the first virtual deflection angle information and the second virtual position information;
In the embodiments of the invention, the virtual direction of the second virtual character relative to the first virtual character describes the second virtual character's orientation relative to the first virtual character, for example: in front, behind, to the left, or to the right. The virtual position of the second virtual character relative to the first virtual character describes how close the second virtual character is to the first virtual character.
Specifically, the first virtual position coordinate information represents the position coordinates in the virtual scene of the first virtual character corresponding to the first user, so the distance between the second virtual character and the first virtual character can be determined from the first virtual position coordinate information and the second virtual position information. The first virtual deflection angle information represents the deflection angle magnitude and direction of the left ear and of the right ear; that is, the first virtual character's face orientation can be derived from the first virtual deflection angle information, and the orientation of the second virtual character relative to the first virtual character can then be determined from the first virtual character's face orientation and the second virtual character's position.
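Concretely, the direction and distance of step 205 can be computed as below: the bearing from the first avatar to the second is compared with the first avatar's face bearing (derived from the two ears' deflection information) and bucketed into front/right/behind/left. The coordinate conventions (y axis pointing north, bearings clockwise from north) are assumptions for illustration.

```python
import math

def relative_direction_and_distance(first_xy, face_bearing_deg, second_xy):
    """Virtual direction and distance of the second virtual character
    relative to the first (step 205)."""
    dx = second_xy[0] - first_xy[0]
    dy = second_xy[1] - first_xy[1]
    distance = math.hypot(dx, dy)
    bearing_to_second = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 = north
    relative = (bearing_to_second - face_bearing_deg) % 360.0
    if relative < 45 or relative >= 315:
        direction = "front"
    elif relative < 135:
        direction = "right"
    elif relative < 225:
        direction = "behind"
    else:
        direction = "left"
    return direction, distance, relative

# First avatar at the origin facing north, second avatar 3 m to the east:
print(relative_direction_and_distance((0, 0), 0.0, (3, 0)))  # ('right', 3.0, 90.0)
```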
Step 206: and adjusting the playing mode of the sound in the earphone according to the virtual direction and the virtual position.
For example, if the second virtual character is in front of the first virtual character's face, the volume of the sound produced by the second virtual character may be turned up; if the second virtual character is behind the first virtual character's face, that volume may be turned down. Likewise, if the second virtual character comes closer to the first virtual character, the volume of its sound is turned up, and if it moves farther away, the volume is turned down.
In practical applications, the volume corresponding to a given virtual direction and virtual position can be determined from a preset correspondence between the distance and virtual direction of two objects and the playback volume, and the volume output by the mobile terminal is adjusted accordingly. Of course, since the earphone has a left channel and a right channel, correspondences can instead be built in advance between the distance and virtual direction of two objects and the volume of the left channel and of the right channel separately; the left-channel and right-channel volumes corresponding to the virtual direction and virtual position are then determined, and the volumes output to the two channels are adjusted separately.
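A minimal sketch of such a left/right-channel adjustment follows. The inverse-distance attenuation, the sine panning law, and the extra attenuation behind the face are all assumptions chosen for illustration; the patent only requires that volume rise with proximity and differ appropriately between the two channels.

```python
import math

def channel_gains(relative_bearing_deg: float, distance_m: float,
                  min_distance: float = 1.0):
    """Left/right channel volumes for a source at the given relative
    bearing (0 = straight ahead, 90 = right) and distance."""
    loudness = 1.0 / max(distance_m, min_distance)   # closer -> louder
    rad = math.radians(relative_bearing_deg)
    pan = math.sin(rad)                  # -1 = fully left, +1 = fully right
    behind = 0.5 if math.cos(rad) < 0 else 1.0       # quieter behind the face
    left = behind * loudness * (1.0 - pan) / 2.0
    right = behind * loudness * (1.0 + pan) / 2.0
    return left, right

# Source directly to the right, 2 m away: the right channel carries it.
print(channel_gains(90.0, 2.0))  # (0.0, 0.5)
```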
In the embodiments of the invention, the virtual sounds emitted by the various virtual objects in the virtual scene, such as burning flames or flowing streams, can be adjusted in a similar manner. That is, after step 202, the method further comprises: acquiring third virtual position information of each virtual object in the virtual scene; determining the virtual direction and virtual position of each virtual object relative to the first virtual character according to the first virtual position coordinate information, the first virtual deflection angle information, and the third virtual position information; and adjusting the volume at which each virtual object's virtual sound is played in the earphone according to the virtual direction and virtual position.
Specifically, the volume at which each virtual object in the virtual scene is heard can still be determined from the relative position relationship between the first user's virtual character and each virtual object, where the third virtual position information is the virtual position coordinate information of the virtual object in the virtual scene. Those skilled in the art can preset the sound-source type and reference volume of each virtual object in the virtual scene. For example, for a small flame in the virtual scene, the sound-source type may be set to a flame-burning sound and the reference volume to a low burning sound; as the virtual character moves toward the small flame, the burning sound is amplified according to the relative positions of the virtual character and the small flame.
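The same distance-based scaling applied to scene objects can be sketched as below, with each object carrying a preset sound-source type and reference volume. The object list, values, and attenuation law are illustrative assumptions.

```python
import math

# Assumed preset table of virtual objects with sound-source type and
# reference volume; "position" is the third virtual position information.
VIRTUAL_OBJECTS = [
    {"name": "small flame", "sound": "flame_burning",
     "reference_volume": 0.2, "position": (5.0, 2.0)},
    {"name": "stream", "sound": "water_flowing",
     "reference_volume": 0.4, "position": (-8.0, 6.0)},
]

def object_volume(avatar_xy, obj, min_distance: float = 1.0) -> float:
    """Scale the object's reference volume by the avatar-object distance."""
    dx = obj["position"][0] - avatar_xy[0]
    dy = obj["position"][1] - avatar_xy[1]
    distance = math.hypot(dx, dy)
    return obj["reference_volume"] / max(distance, min_distance)

for obj in VIRTUAL_OBJECTS:
    print(obj["name"], round(object_volume((0.0, 0.0), obj), 3))
```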
For a better understanding of embodiments of the present invention, reference is made to the following exemplary description, taken in conjunction with FIG. 3 of the accompanying drawings:
Fig. 3 is a schematic diagram of the position relationship between virtual characters in a multi-user virtual scene. Virtual character A corresponds to user A, virtual character B corresponds to user B, virtual character C corresponds to user C, and virtual character D corresponds to user D; the virtual objects in the virtual scene are a small flame and a stream.
Suppose that in the real scene, user B, user C, and user D are respectively in front of, behind, and to the left of user A. If user A walks toward user B, the earpieces on user A's two ears continuously send the first deflection angle information and first position coordinate information of user A, measured by the displacement sensors, to user A's mobile terminal. User A's mobile terminal then continuously determines, following step 202, the first virtual position coordinate information and first virtual deflection angle information of virtual character A in the virtual scene; that is, virtual character A moves forward in the virtual scene toward virtual character B. While virtual character A advances toward virtual character B, user A's mobile terminal also continuously receives the virtual position information of virtual characters B, C, and D in the virtual scene sent by the mobile terminals of users B, C, and D, and locally queries the virtual position information of the small flame and the stream in the virtual scene. Meanwhile, user A's mobile terminal acquires the sounds emitted in the virtual scene by the other users' virtual characters and by the virtual objects. Then, following step 205, it determines the relative position relationship between virtual character A and the other virtual characters and virtual objects, and adjusts the volume at which they are heard in user A's earpieces. That is, as virtual character A gets closer to virtual character B, and at the same time closer to virtual character D and the small flame but farther from virtual character C and the stream, then on the way toward virtual character B: user B's speech heard in both earpieces grows louder and louder; user D's speech and the small flame's burning sound heard in the corresponding-side earpiece grow louder; the sound of the stream heard in the corresponding-side earpiece grows quieter; and user C's speech heard in both earpieces grows quieter. If virtual character B advances toward virtual character A while virtual character A advances toward virtual character B, the volume of user B's speech heard in virtual character A's two earpieces increases all the faster, and virtual character B's footsteps are heard louder and louder. The same applies to the sound changes heard in the earphones of user B, user C, and user D: as the relative positions of the corresponding virtual characters and virtual objects in the virtual scene change in real time, each user hears the sounds emitted by the virtual characters and virtual objects in the virtual scene immersively.
According to the embodiments of the invention, an earphone provided with displacement sensors is used, so that in a multi-user virtual scene the first deflection angle information and first position coordinate information of the first user can be monitored by the displacement sensors. After the mobile terminal receives the first deflection angle information and first position coordinate information of the first user sent by the earphone, it can determine the first virtual position coordinate information and first virtual deflection angle information of the first virtual character in the virtual scene according to the first deflection angle information, the first position coordinate information, and the current position information, in the virtual scene, of the first virtual character corresponding to the first user. The mobile terminal also receives second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene, and after acquiring the sound to be emitted by the second virtual character in the virtual scene, it can determine the virtual direction and virtual position of the second virtual character relative to the first virtual character according to the first virtual position coordinate information, the first virtual deflection angle information, and the second virtual position information, and then adjust the playing mode of the sound in the earphone according to the virtual direction and virtual position. In the invention, the sound of the virtual characters corresponding to other users, as emitted from the user's earphone, changes with the relative positions of the user's own virtual character and each other user's virtual character, so that the user hears the sounds emitted by other users' virtual characters in the virtual scene as if present in person, which enhances the user's auditory immersive experience in a multi-user virtual scene.
It should be noted that, in the audio playing method provided by the embodiments of the application, the execution body may be an audio playing apparatus, or a control module in the audio playing apparatus for executing the audio playing method. The embodiments of the application take an audio playing apparatus executing the audio playing method as an example to describe the audio playing apparatus provided by the embodiments.
Referring to fig. 4, a block diagram of an audio playing apparatus 400 of the present invention is shown. The apparatus includes:
a first receiving module 401, configured to receive the movement state information of the first user sent by the earphone; the earphone is provided with a displacement sensor, and the movement state information is measured by the displacement sensor;
a determining module 402, configured to determine, according to the movement state information and current location information of a first virtual character corresponding to the first user in a virtual scene, first virtual location information of the first virtual character in the virtual scene;
a second receiving module 403, configured to receive second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene;
an obtaining module 404, configured to obtain a sound to be emitted by the second virtual character in the virtual scene;
an adjusting module 405, configured to adjust a playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information.
Optionally, the obtaining module 404 includes:
a first acquisition module, configured to acquire the speech sound sent by the second electronic device, where the speech sound is captured by a microphone of the second electronic device, and to use the speech sound as the sound to be emitted by the second virtual character in the virtual scene.
Optionally, the obtaining module 404 includes:
a second acquisition module, configured to obtain the clothing information of the second virtual character, and to obtain, from a preset sound database according to the clothing information, the walking sound to be emitted by the second virtual character.
Optionally, the first receiving module 401 is specifically configured to receive first deflection angle information and first position coordinate information of the first user, which are sent by the headset.
Optionally, the determining module 402 is specifically configured to determine, according to the first deflection angle information, the first position coordinate information, and current position information of a first virtual character corresponding to the first user in a virtual scene, first virtual position coordinate information and first virtual deflection angle information of the first virtual character in the virtual scene.
Optionally, the adjusting module 405 includes:
the determining submodule is used for determining the virtual direction and the virtual position of the second virtual character relative to the first virtual character according to the first virtual position coordinate information, the first virtual deflection angle information and the second virtual position information;
and the adjusting submodule is used for adjusting the playing mode of the sound in the earphone according to the virtual direction and the virtual position.
According to the embodiments of the invention, an earphone provided with a displacement sensor is used, so that in a multi-user virtual scene the movement state information of the first user can be monitored by the displacement sensor. After the mobile terminal receives the movement state information of the first user sent by the earphone, it can determine the first virtual position information of the first virtual character in the virtual scene according to the movement state information and the current position information, in the virtual scene, of the first virtual character corresponding to the first user. The mobile terminal also receives second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene, and after acquiring the sound to be emitted by the second virtual character in the virtual scene, it can adjust the playing mode of that sound in the earphone according to the first virtual position information and the second virtual position information. In the invention, the sound of the virtual characters corresponding to other users, as emitted from the user's earphone, changes with the relative positions of the user's own virtual character and each other user's virtual character, so that the user hears the sounds emitted by other users' virtual characters in the virtual scene as if present in person, which enhances the user's auditory immersive experience in a multi-user virtual scene.
The audio playing apparatus in the embodiments of the application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the application are not specifically limited.
The audio playing apparatus in the embodiments of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the application are not specifically limited.
The audio playing device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 5, an electronic device M00 is further provided in this embodiment of the present application, and includes a processor M01, a memory M02, and a program or an instruction stored in the memory M02 and executable on the processor M01, where the program or the instruction when executed by the processor M01 implements each process of the foregoing audio playing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: a radio frequency unit 6001, a network unit 6002, an audio output unit 6003, an input unit 6004, a sensor 6005, a display unit 6006, a user input unit 6007, an interface unit 6008, a memory 6009, and a processor 6010.
Those skilled in the art will appreciate that the electronic device 600 may further include a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 6010 through a power management system, so that charging, discharging, and power-consumption management are implemented through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently; details are omitted here.
The radio frequency unit 6001 is configured to receive the movement state information of the first user sent by the earphone.
The processor 6010 is configured to receive the movement state information sent by the radio frequency unit 6001; determine first virtual position information of a first virtual character in a virtual scene according to the movement state information and the current position information, in the virtual scene, of the first virtual character corresponding to the first user; receive second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene; acquire the sound to be emitted by the second virtual character in the virtual scene; and adjust the playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information.
Optionally, the processor 6010 is further configured to acquire the speech sound sent by the second electronic device, where the speech sound is captured by a microphone of the second electronic device, and to use the speech sound as the sound to be emitted by the second virtual character in the virtual scene.
Optionally, the processor 6010 is further configured to obtain the clothing information of the second virtual character, and to obtain, from a preset sound database according to the clothing information, the walking sound to be emitted by the second virtual character.
Optionally, the processor 6010 is further configured to receive first deflection angle information and first position coordinate information of the first user sent by the headset; and determining first virtual position coordinate information and first virtual deflection angle information of the first virtual character in the virtual scene according to the first deflection angle information, the first position coordinate information and current position information of the first virtual character corresponding to the first user in the virtual scene.
Optionally, the processor 6010 is further configured to determine, according to the first virtual position coordinate information, the first virtual deflection angle information, and the second virtual position information, a virtual direction and a virtual position of the second virtual character with respect to the first virtual character; and adjusting the playing mode of the sound in the earphone according to the virtual direction and the virtual position.
According to the embodiments of the invention, an earphone provided with a displacement sensor is used, so that in a multi-user virtual scene the displacement sensor can monitor the movement state information of the first user. After the mobile terminal receives the movement state information sent by the earphone, it can determine the first virtual position information of the first virtual character in the virtual scene according to the movement state information and the current position information of the first virtual character corresponding to the first user. The mobile terminal also receives second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene, and after acquiring the sound to be emitted by the second virtual character, it can adjust the playing mode of that sound in the earphone according to the first virtual position information and the second virtual position information. In this way, the sounds of the virtual characters corresponding to other users, as emitted from the user's earphone, change with the relative positions between the user's virtual character and each other user's virtual character, so the user hears the sounds of the other users' virtual characters as if present in the virtual scene, which enhances the auditory immersion of the multi-user virtual scene.
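Reusing the hypothetical helpers sketched above, the end-to-end behavior can be illustrated as follows: when the first user steps forward while a second virtual character stands ahead and to the right, that character's sound comes out louder in the right earpiece.

```python
# End-to-end illustration with made-up values.
first_xy, first_yaw = to_virtual((0.0, 0.0), 0.0, 0.1, 0.0)  # step 0.1 m forward
second_xy = (5.0, 5.0)                                        # ahead and to the right
left, right = stereo_gains(first_xy, first_yaw, second_xy)
assert right > left  # the right channel is louder, as expected
```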
It should be understood that in this embodiment of the application, the input unit 6004 may include a Graphics Processing Unit (GPU) 60041 and a microphone 60042; the graphics processor 60041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 6006 may include a display panel 60061, and the display panel 60061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 6007 includes a touch panel 60071, also referred to as a touch screen, and other input devices 60072. The touch panel 60071 may include two parts: a touch detection device and a touch controller. The other input devices 60072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 6009 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 6010 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It should be understood that the modem processor may alternatively not be integrated into the processor 6010.
An embodiment of the present application further provides a readable storage medium. A program or an instruction is stored on the readable storage medium, and when executed by a processor, the program or instruction implements the processes of the above audio playing method embodiment and achieves the same technical effect, which is not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the foregoing embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the processes of the above audio playing method embodiment and achieve the same technical effect, which is not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present application.
While the embodiments have been described with reference to the accompanying drawings, the invention is not limited to the specific embodiments described above, which are illustrative rather than restrictive; those skilled in the art may make various changes without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An audio playing method, comprising:
receiving movement state information of a first user sent by an earphone, wherein the earphone is provided with a displacement sensor and the movement state information is measured by the displacement sensor;
determining first virtual position information of a first virtual character in a virtual scene according to the movement state information and current position information of the first virtual character corresponding to the first user in the virtual scene;
receiving second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene;
acquiring a sound to be emitted by the second virtual character in the virtual scene;
and adjusting the playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information.
2. The method of claim 1, wherein acquiring the sound to be emitted by the second virtual character in the virtual scene comprises:
acquiring a speaking voice sent by a second electronic device, wherein the speaking voice is collected by a microphone of the second electronic device;
and taking the speaking voice as the sound to be emitted by the second virtual character in the virtual scene.
3. The method of claim 1, wherein acquiring the sound to be emitted by the second virtual character in the virtual scene comprises:
acquiring clothing information of the second virtual character;
and acquiring, from a preset sound database according to the clothing information, the walking sound to be emitted by the second virtual character.
4. The method of claim 1, wherein receiving the movement state information of the first user sent by the earphone comprises:
receiving first deflection angle information and first position coordinate information of the first user sent by the earphone;
and the determining, according to the movement state information and the current position information of the first virtual character corresponding to the first user in the virtual scene, the first virtual position information of the first virtual character in the virtual scene comprises:
determining first virtual position coordinate information and first virtual deflection angle information of the first virtual character in the virtual scene according to the first deflection angle information, the first position coordinate information, and the current position information of the first virtual character corresponding to the first user in the virtual scene.
5. The method of claim 4, wherein adjusting the playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information comprises:
determining a virtual direction and a virtual position of the second virtual character relative to the first virtual character according to the first virtual position coordinate information, the first virtual deflection angle information, and the second virtual position information;
and adjusting the playing mode of the sound in the earphone according to the virtual direction and the virtual position.
6. An audio playback apparatus, comprising:
a first receiving module, configured to receive movement state information of a first user sent by an earphone, wherein the earphone is provided with a displacement sensor and the movement state information is measured by the displacement sensor;
a determining module, configured to determine first virtual position information of a first virtual character in a virtual scene according to the movement state information and current position information of the first virtual character corresponding to the first user in the virtual scene;
a second receiving module, configured to receive second virtual position information of a second virtual character corresponding to at least one second user in the virtual scene;
an obtaining module, configured to obtain a sound to be emitted by the second virtual character in the virtual scene;
and an adjusting module, configured to adjust the playing mode of the sound in the earphone according to the second virtual position information and the first virtual position information.
7. The apparatus of claim 6, wherein the obtaining module comprises:
a first obtaining module, configured to acquire a speaking voice sent by a second electronic device, wherein the speaking voice is collected by a microphone of the second electronic device, and to take the speaking voice as the sound to be emitted by the second virtual character in the virtual scene.
8. The apparatus of claim 6, wherein the obtaining module comprises:
a second obtaining module, configured to acquire clothing information of the second virtual character, and to acquire, from a preset sound database according to the clothing information, the walking sound to be emitted by the second virtual character.
9. The apparatus of claim 6,
the first receiving module is specifically configured to receive first deflection angle information and first position coordinate information of the first user sent by the earphone;
and the determining module is specifically configured to determine first virtual position coordinate information and first virtual deflection angle information of the first virtual character in the virtual scene according to the first deflection angle information, the first position coordinate information, and the current position information of the first virtual character corresponding to the first user in the virtual scene.
10. The apparatus of claim 6, wherein the adjustment module comprises:
a determining submodule, configured to determine the virtual direction and the virtual position of the second virtual character relative to the first virtual character according to the first virtual position coordinate information, the first virtual deflection angle information, and the second virtual position information;
and an adjusting submodule, configured to adjust the playing mode of the sound in the earphone according to the virtual direction and the virtual position.
CN202011589470.8A 2020-12-28 2020-12-28 Audio playing method and device Pending CN112612445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011589470.8A CN112612445A (en) 2020-12-28 2020-12-28 Audio playing method and device

Publications (1)

Publication Number Publication Date
CN112612445A true CN112612445A (en) 2021-04-06

Family

ID=75248581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011589470.8A Pending CN112612445A (en) 2020-12-28 2020-12-28 Audio playing method and device

Country Status (1)

Country Link
CN (1) CN112612445A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004267433A (en) * 2003-03-07 2004-09-30 Namco Ltd Information processor, server, program, recording medium for providing voice chat function
CN107534824A (en) * 2015-05-18 2018-01-02 索尼公司 Message processing device, information processing method and program
CN106023983A (en) * 2016-04-27 2016-10-12 广东欧珀移动通信有限公司 Multi-user voice interaction method and device based on virtual reality scene
CN109582273A (en) * 2018-11-26 2019-04-05 联想(北京)有限公司 Audio-frequency inputting method, electronic equipment and audio output device
CN111142665A (en) * 2019-12-27 2020-05-12 恒玄科技(上海)股份有限公司 Stereo processing method and system of earphone assembly and earphone assembly

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022227421A1 (en) * 2021-04-26 2022-11-03 深圳市慧鲤科技有限公司 Method, apparatus, and device for playing back sound, storage medium, computer program, and program product
CN113163293A (en) * 2021-05-08 2021-07-23 苏州触达信息技术有限公司 Environment sound simulation system and method based on wireless intelligent earphone
CN113766383A (en) * 2021-09-08 2021-12-07 度小满科技(北京)有限公司 Method and device for controlling earphone to mute

Similar Documents

Publication Publication Date Title
CN108619721B (en) Distance information display method and device in virtual scene and computer equipment
US20200316473A1 (en) Virtual object control method and apparatus, computer device, and storage medium
CN112612445A (en) Audio playing method and device
EP3265864B1 (en) Tracking system for head mounted display
JP7121805B2 (en) Virtual item adjustment method and its device, terminal and computer program
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
US20210312695A1 (en) Hair rendering method, device, electronic apparatus, and storage medium
WO2019109778A1 (en) Method, device, and terminal for showing result of game round
CN110141857A (en) Facial display methods, device, equipment and the storage medium of virtual role
CN111589142A (en) Virtual object control method, device, equipment and medium
WO2022134980A1 (en) Control method and apparatus for virtual object, terminal, and storage medium
US11032537B2 (en) Movable display for viewing and interacting with computer generated environments
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN106104361A (en) The head mounted display eyeshade being used together with mobile computing device
CN111603770B (en) Virtual environment picture display method, device, equipment and medium
WO2020233403A1 (en) Personalized face display method and apparatus for three-dimensional character, and device and storage medium
CN111589141B (en) Virtual environment picture display method, device, equipment and medium
WO2019184782A1 (en) Method for controlling object in virtual scene, device, and computer apparatus
JP7186901B2 (en) HOTSPOT MAP DISPLAY METHOD, DEVICE, COMPUTER DEVICE AND READABLE STORAGE MEDIUM
CN113559495B (en) Method, device, equipment and storage medium for releasing skill of virtual object
WO2022227915A1 (en) Method and apparatus for displaying position marks, and device and storage medium
CN112156465A (en) Virtual character display method, device, equipment and medium
CN113244616A (en) Interaction method, device and equipment based on virtual scene and readable storage medium
CN112604274B (en) Virtual object display method, device, terminal and storage medium
CN112367533B (en) Interactive service processing method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination