CN108319439A - A sound adjustment method and device - Google Patents

A sound adjustment method and device

Info

Publication number
CN108319439A
CN108319439A (application CN201710035979.XA)
Authority
CN
China
Prior art keywords
user
sound
scenes
sound source
left ear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710035979.XA
Other languages
Chinese (zh)
Inventor
张宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201710035979.XA priority Critical patent/CN108319439A/en
Publication of CN108319439A publication Critical patent/CN108319439A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The embodiment of the present invention discloses a sound adjustment method. The method includes: obtaining the deflection angle of the user's head in the actual scene; determining the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in a predetermined virtual reality (VR)/augmented reality (AR) scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene; determining, from these two distances, the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear respectively; and controlling, according to these parameters, the parameters of the sound that the sound source outputs to the user's left ear and right ear respectively, so as to separately adjust the sound received by the user's left ear and the sound received by the user's right ear. The embodiment of the present invention also discloses a sound adjustment device.

Description

A sound adjustment method and device
Technical field
The present invention relates to the technical field of virtual reality, and in particular to a sound adjustment method and device.
Background technology
Now, with virtual reality (VR, Virtual Reality) equipment/augmented reality (AR, Augmented Reality) the rapid development of equipment, improving the performance of VR/AR equipment becomes a kind of development trend.
User is immersed in the scene when using VR/AR equipment, and the direction position of user will be with user movement and send out Changing, it is this to change the variation for bringing sound direction of transfer, for example, when user is located at the front of sound source in the scene originally, So, when user original place rotates 180 °, sound source is located in user rear, in this way, the direction change in location of user brings user With the variation of sound source relative position so that the sound transmitted in front of user originally has been reformed into be transmitted from the rear of user Sound.
However, in the prior art, the direction of transfer of the sound source position of various VR/AR equipment Scenes is fixed on the market Constant, this can cause user's feeling of immersion weaker, and user experience is weaker.
Summary of the invention
In view of this, embodiments of the present invention are intended to provide a sound adjustment method and device, so that the user can receive sound that matches his or her position, thereby improving the user's sense of immersion and the user experience.
To achieve the above objectives, the technical solution of the present invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a sound adjustment method, including: obtaining the deflection angle of the user's head in the actual scene; determining the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in a predetermined virtual reality (VR)/augmented reality (AR) scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene; determining, from the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear, the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear respectively; and controlling, according to these parameters, the parameters of the sound output by the sound source to the user's left ear and right ear respectively, so as to separately adjust the sound received by the user's left ear and the sound received by the user's right ear.
Further, determining the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene includes: determining the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene; determining the distance from the sound source to the user's left ear according to the sound source position in the predetermined VR/AR scene and the position of the user's left ear in the VR/AR scene; and determining the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene and the position of the user's right ear in the VR/AR scene.
Further, determining the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene includes: determining the post-deflection angle of the user's head in the VR/AR scene according to the pre-deflection angle of the user's head in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene; and determining the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the post-deflection angle of the user's head in the VR/AR scene.
Further, the deflection angle of the user's head in the actual scene is the deflection angle of the user's head in the horizontal plane.
Further, the parameters of the sound received by the user's left ear or the parameters of the sound received by the user's right ear include: a loudness value and/or a lag duration.
In a second aspect, an embodiment of the present invention provides a sound adjustment device, including: an acquisition module, configured to obtain the deflection angle of the user's head in the actual scene; a first determining module, configured to determine the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in a predetermined virtual reality (VR)/augmented reality (AR) scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene; a second determining module, configured to determine the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear respectively according to the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear; and an adjustment module, configured to control, according to the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear, the parameters of the sound output by the sound source to the user's left ear and right ear respectively, so as to separately adjust the sound received by the user's left ear and the sound received by the user's right ear.
Further, the first determining module includes: a first determining submodule, configured to determine the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene; a second determining submodule, configured to determine the distance from the sound source to the user's left ear according to the sound source position in the predetermined VR/AR scene and the position of the user's left ear in the VR/AR scene; and a third determining submodule, configured to determine the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene and the position of the user's right ear in the VR/AR scene.
Further, the first determining submodule is specifically configured to determine the post-deflection angle of the user's head in the VR/AR scene according to the pre-deflection angle of the user's head in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene, and to determine the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the post-deflection angle of the user's head in the VR/AR scene.
Further, the deflection angle of the user's head in the actual scene is the deflection angle of the user's head in the horizontal plane.
Further, the parameters of the sound received by the user's left ear or the parameters of the sound received by the user's right ear include: a loudness value and/or a lag duration.
According to the sound adjustment method and device provided by the embodiments of the present invention, the deflection angle of the user's head in the actual scene is obtained first; then the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear are determined according to the sound source position in the predetermined VR/AR scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene, that is, the distances from the sound source to the user's two ears after the head has deflected are obtained. The parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear are then determined respectively from these distances. Finally, according to these parameters, the sound source is controlled to output sound to the user's left ear and right ear with the corresponding parameters, so that the sound received by each ear is adjusted separately. In other words, the embodiments of the present invention obtain the deflection angle of the user's head in the actual scene, re-determine the position of the sound source relative to the user's left and right ears, and then control the sound source so as to adjust the sound received by the user's left ear and the sound received by the user's right ear. In this way, the user receives sound that matches his or her position, which improves the user's sense of immersion and the user experience.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the sound adjustment method in an embodiment of the present invention;
Fig. 2 is an optional schematic flowchart of the sound adjustment method in an embodiment of the present invention;
Fig. 3 is an optional schematic diagram of a VR/AR scene in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the sound adjustment device in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention.
An embodiment of the present invention provides a sound adjustment method, which can be applied to VR/AR equipment. Fig. 1 is a schematic flowchart of the sound adjustment method in an embodiment of the present invention. As shown in Fig. 1, the method includes:
S101: Obtain the deflection angle of the user's head in the actual scene.
Specifically, when the user uses VR/AR equipment and is immersed in the VR/AR scene, the user's head may deflect in the actual scene, producing the above-mentioned deflection angle of the user's head in the actual scene.
In practical applications, a motion sensor is built into the VR/AR equipment to track the rotation of the user's head and thereby obtain the deflection angle of the user's head in the actual scene.
The deflection angle of the user's head in the actual scene is the deflection angle of the user's head in the horizontal plane. For example, in the actual scene a coordinate system parallel to the horizontal plane can be set up, and the deflection angle of the user's head in the actual scene can be the angle between the line connecting the user's two ears and the abscissa axis, or between that line and the ordinate axis.
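For illustration only, the following minimal Python sketch computes such a horizontal deflection angle, assuming (purely as an example) that a tracking system reports the positions of the two ears in the horizontal coordinate system; the function name and the sample coordinates are assumptions, not part of the embodiment:

import math

def horizontal_deflection_angle(left_ear_xy, right_ear_xy):
    """Angle (in radians) between the line through the two ears and the
    abscissa (x) axis, measured in the horizontal plane."""
    dx = right_ear_xy[0] - left_ear_xy[0]
    dy = right_ear_xy[1] - left_ear_xy[1]
    return math.atan2(dy, dx)

# Ears aligned with the ordinate (y) axis: the ears line is rotated 90 degrees
# away from the abscissa axis.
print(math.degrees(horizontal_deflection_angle((0.0, -0.1), (0.0, 0.1))))  # 90.0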
S102: Determine the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene.
Specifically, a three-dimensional coordinate system can be set up in the VR/AR scene, and the position coordinates of the sound source and the position coordinates of the user in this coordinate system can be determined. In this way, the sound source position in the VR/AR scene and the user position in the VR/AR scene are known.
In order to determine the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear, reference is made to Fig. 2, which is an optional schematic flowchart of the sound adjustment method in an embodiment of the present invention. Referring to Fig. 2, in an optional embodiment, S102 may include:
S201: Determine the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene.
Here, in practical applications, the positions of the user's left and right ears in the VR/AR scene can be determined from the user position in the VR/AR scene using an empirical value for the distance between the two ears, and the positions of the user's left ear and right ear in the VR/AR scene are then determined from these positions and the deflection angle of the user's head in the actual scene.
Further, after user's head deflects, in order to determine the left ear of user in the VR/AR scenes after deflection User's right ear position in position and VR/AR scenes, in a kind of optional embodiment, S201 may include:
It is deflected in actual scene according to angle before user's head deflection in predetermined VR/AR scenes and user's head Angle determines in VR/AR scenes angle after user's head deflection;
Specifically, before user's head does not deflect, VR/AR equipment obtains and preserves user's head in actual field Angle in scape before not deflecting, the angle are the preceding angle of user's head deflection in VR/AR scenes;When user's head is sent out When raw deflection, VR/AR equipment obtains and preserves deflection angle of the user's head in actual scene when deflecting, the angle As user's head deflection angle in actual scene;Finally, the angle before user's head in VR/AR scenes being deflected and user Head deflection angle addition in actual scene can be obtained in VR/AR scenes angle after user's head deflection.
According to angle after user's head deflection in user location in predetermined VR/AR scenes and VR/AR scenes, determine Go out user's right ear position in the left ear position of user in VR/AR scenes and VR/AR scenes.
From the foregoing it will be appreciated that user location in predetermined VR/AR scenes can be passed through according to the empirical value of ears distance The position of user or so ear in VR/AR scenes before user's head deflection is determined, it is possible to before being deflected according to user's head Angle after user's head deflection in the position of the left ear of user and VR/AR scenes, determines to use in VR/AR scenes in VR/AR scenes The family positions Zuo Er;After can be according to user's head deflection in the position of user's auris dextra in VR/AR scenes before deflection and VR/AR scenes Angle determines user's right ear position in VR/AR scenes.
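A minimal Python sketch of this step is given below. It assumes, as in the worked example later in this description, that the origin of the head coordinate system is the midpoint of the line connecting the two ear holes, that half of the ear-to-ear distance is 0.1 m, and that the ear-hole plane lies 1.65 m above the user position; the function name and the default values are illustrative assumptions:

import math

def ear_positions(user_xyz, theta, half_ear_dist=0.1, ear_height=1.65):
    """Positions of the left and right ear holes in the VR/AR scene's
    three-dimensional coordinate system.

    user_xyz      -- (x2, y2, z2), the user position in the VR/AR scene
    theta         -- post-deflection angle of the head, in radians
    half_ear_dist -- half of the ear-to-ear distance (empirical value)
    ear_height    -- height of the ear-hole plane above the user position
    """
    x2, y2, z2 = user_xyz
    left = (x2 - half_ear_dist * math.sin(theta),
            y2 - half_ear_dist * math.cos(theta),
            z2 + ear_height)
    right = (x2 + half_ear_dist * math.sin(theta),
             y2 + half_ear_dist * math.cos(theta),
             z2 + ear_height)
    return left, right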
S202: Determine the distance from the sound source to the user's left ear according to the sound source position in the predetermined VR/AR scene and the position of the user's left ear in the VR/AR scene.
S203: Determine the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene and the position of the user's right ear in the VR/AR scene.
In S202 and S203, once the sound source position and the position of the user's left ear in the VR/AR scene have been determined, the distance from the sound source to the user's left ear can be calculated with the distance formula between two points; likewise, once the sound source position and the position of the user's right ear in the VR/AR scene have been determined, the distance from the sound source to the user's right ear can be calculated with the distance formula between two points.
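The point-to-point distance referred to here is the ordinary Euclidean distance in the scene's three-dimensional coordinate system; a minimal sketch (the function name is an assumption):

import math

def distance_3d(p, q):
    """Straight-line distance between two points in the VR/AR scene's
    three-dimensional coordinate system."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Distance from the sound source to each ear, using ear_positions() above:
# source = (x1, y1, z1)
# left_ear, right_ear = ear_positions((x2, y2, z2), theta)
# L1, L2 = distance_3d(source, left_ear), distance_3d(source, right_ear)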
S103: Determine the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear respectively according to the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear.
The parameters of the sound received by the user's left ear or the parameters of the sound received by the user's right ear include: a loudness value and/or a lag duration.
Specifically, after the distance from the sound source to the user's left ear has been determined, the loudness value of the sound received by the user's left ear can be calculated using a sound loudness formula; similarly, after the distance from the sound source to the user's right ear has been determined, the loudness value of the sound received by the user's right ear can be calculated. In addition or instead, the lag duration of the sound received by the user's left ear can be calculated using a sound lag-duration formula, and similarly the lag duration of the sound received by the user's right ear can be calculated.
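A minimal sketch of such calculations, following the loudness and lag-duration formulas used in the worked example later in this description (loudness = S × a / L², lag = L / v); the function name and the default propagation speed are assumptions:

def ear_sound_parameters(distance, source_loudness, attenuation_coeff,
                         speed_of_sound=340.0):
    """Loudness value and lag duration of the sound arriving at one ear,
    as in the worked example: loudness = S * a / L**2, lag = L / v."""
    loudness = source_loudness * attenuation_coeff / distance ** 2
    lag = distance / speed_of_sound
    return loudness, lag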
S104: Control, according to the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear, the parameters of the sound output by the sound source to the user's left ear and right ear respectively, so as to separately adjust the sound received by the user's left ear and the sound received by the user's right ear.
Here, after the VR/AR equipment has obtained the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear, it can control the sound source so as to adjust the sound received by the user's left ear and the sound received by the user's right ear. In this way, the user's left and right ears receive the appropriate sound in real time as the user's head deflects.
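One possible way of applying such per-ear parameters to an audio buffer is sketched below; the embodiment does not specify how the output is rendered, so the linear gain, the sample rate, and the crude prepend-silence delay are all assumptions made purely for illustration:

def apply_ear_parameters(samples, gain, lag_seconds, sample_rate=48000):
    """Scale a mono sample buffer (a list of floats) by a per-ear gain and
    delay it by the per-ear lag duration, approximated by prepending silence."""
    delay_samples = int(round(lag_seconds * sample_rate))
    return [0.0] * delay_samples + [s * gain for s in samples]

# left_out  = apply_ear_parameters(source_samples, gain_left,  lag_left)
# right_out = apply_ear_parameters(source_samples, gain_right, lag_right)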
A concrete example is given below to illustrate one or more of the above sound adjustment method embodiments.
In the following, the VR/AR scene is a game scene and the AR equipment worn by the user is a pair of glasses. A horizontal coordinate system is established whose origin o is the point 10 cm behind the centre of the glasses, in the horizontal direction toward the back of the user's head (by default, the origin o is the midpoint of the line connecting the user's two ear holes). Fig. 3 is an optional schematic diagram of a VR/AR scene in an embodiment of the present invention. Referring to Fig. 3, the X and Y coordinates of the three-dimensional coordinate system of the VR/AR scene are the projection of the horizontal coordinate system on the ground, and the Z coordinate is perpendicular to the horizontal plane. When thunder occurs in the game scene, the position where the lightning occurs is the position of the sound source, and the sound is transmitted from that position toward the user's left and right ears as shown in Fig. 3. Based on this VR/AR scene, the sound adjustment method includes:
Step A: Store the initial position of the user in the VR/AR scene when the user puts on the glasses and starts the game scene, and the sound source position; then use the motion sensor built into the VR equipment to track the deflection angle of the user's head rotation.
Step B: Determine the three-dimensional coordinates of the sound source in the VR/AR scene as (x1, y1, z1) and the three-dimensional coordinates of the user in the VR/AR scene as (x2, y2, z2). The three-dimensional coordinates of the origin o of the preset horizontal coordinate system (the midpoint of the line connecting the user's ear holes) in the VR/AR scene coordinate system are (x2, y2, z2 + 1.65), where the distance h from the soles of the user's feet to the ear-hole plane is taken by default as 165 cm, i.e. 1.65 metres; this relative position is kept updated in real time.
Step C: Assume that the distance between the user's two ear holes is 20 cm (each ear hole is 10 cm, i.e. 0.1 metres, from the origin o), and that the initial angle of the head before deflection is 0. Then, when the deflection angle of the user's head is θ, the coordinates of the left ear in the ear-hole plane coordinate system are:
(0.1 sin θ, 0.1 cos θ)
From this, the coordinates of the user's left ear hole in the three-dimensional coordinate system of the VR/AR scene can be derived as:
(x2 - 0.1 sin θ, y2 - 0.1 cos θ, z2 + 1.65);
Similarly, the coordinates of the user's right ear hole in the three-dimensional coordinate system of the scene can be derived as:
(x2 + 0.1 sin θ, y2 + 0.1 cos θ, z2 + 1.65)
The distances from the sound source to the user's left and right ear holes, denoted L1 and L2 respectively, are then obtained from the straight-line distance formula between two points in the three-dimensional coordinate system.
Step D: From the distances from the sound source to the user's left and right ears, the loudness values transmitted to the user's left and right ears are determined by the following formulas:
ΔS1 = S × (a / L1²)
ΔS2 = S × (a / L2²)
where ΔS1 is the loudness value at the user's left ear, ΔS2 is the loudness value at the user's right ear, S is the default loudness emitted by the sound source, and a is the attenuation coefficient;
From the distances from the sound source to the user's left and right ears, the lag durations of the sound transmitted to the user's left and right ears are determined by the following formulas:
t1 = L1 / v
t2 = L2 / v
where t1 is the lag duration at the user's left ear, t2 is the lag duration at the user's right ear, and v is the propagation speed of sound in the medium of the VR/AR scene. This simulates the way a person in real life judges the position of a sound source from the volume of the sound received at the left and right ears and the order in which it arrives (a biological characteristic of human hearing).
Step E: Output the left and right earphone audio or loudspeaker audio according to the lag durations t1 and t2 transmitted to the user's left and right ears and the attenuated loudness values ΔS1 and ΔS2.
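Putting Steps A to E together, the following self-contained Python sketch runs the whole computation for one sound source; the numeric defaults (0.1 m half ear distance, 1.65 m ear height, 340 m/s propagation speed) and the example values for S, a, the positions and θ are assumptions taken from or added to this example:

import math

def ear_coordinates(user_xyz, theta):
    """Step C: ear-hole coordinates in the scene's 3D coordinate system."""
    x2, y2, z2 = user_xyz
    left  = (x2 - 0.1 * math.sin(theta), y2 - 0.1 * math.cos(theta), z2 + 1.65)
    right = (x2 + 0.1 * math.sin(theta), y2 + 0.1 * math.cos(theta), z2 + 1.65)
    return left, right

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def per_ear_output(source_xyz, user_xyz, theta, S, a, v=340.0):
    """Steps C-E: (loudness, lag) pairs for the left and right ears."""
    left, right = ear_coordinates(user_xyz, theta)
    L1, L2 = dist(source_xyz, left), dist(source_xyz, right)
    dS1, dS2 = S * a / L1 ** 2, S * a / L2 ** 2   # Step D: loudness values
    t1, t2 = L1 / v, L2 / v                       # Step D: lag durations
    return (dS1, t1), (dS2, t2)

# Example: thunder at (10, 5, 30), user at (0, 0, 0), head deflected 30 degrees.
left_params, right_params = per_ear_output((10, 5, 30), (0, 0, 0),
                                            math.radians(30), S=100.0, a=1.0)
print(left_params, right_params)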
In addition, it should be noted that the above VR/AR equipment can also be provided with an on/off switch option; through this setting, the user can control whether the VR/AR equipment performs the sound adjustment method.
In the traditional scheme, the sense of sound direction in a VR/AR scene is poorly conveyed. In particular, when the user's head turns or the user's position in the VR/AR scene changes, the audio output remains unchanged, so the user has no way of judging the direction of the sound source, or is even misled into a wrong judgement. For example, if the sound source is on the user's left in the VR/AR scene and the user moves to the left of the sound source, the sound source perceived according to the traditional scheme is still on the user's left, even though the sound source has actually moved to the user's right. The traditional scheme is therefore of poor practicality, the user's sense of immersion is weak, and the user experience is poor. Subjectively, the traditional scheme behaves as if the user's position and orientation in the VR/AR scene were fixed; this is more or less acceptable when the viewing carrier is a stationary device such as a desktop computer, but VR/AR equipment is different, since it moves and turns with the user. In contrast, the embodiments of the present invention accurately locate the user's position and facing direction, calculate the distances from the sound source to the user's left and right ears, and simulate the experience and manner in which the human ear judges the position of a sound source in real life. By reproducing accurate volume and delay in the virtual VR/AR scene, the user can locate the sound source realistically, which increases the user's sense of immersion and improves the user experience.
According to the sound adjustment method provided by the embodiment of the present invention, the deflection angle of the user's head in the actual scene is obtained first; then the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear are determined according to the sound source position in the predetermined VR/AR scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene, that is, the distances from the sound source to the user's two ears after the head has deflected are obtained. The parameters of the sound received by the user's left ear and by the user's right ear are then determined from these distances. Finally, according to these parameters, the sound source is controlled to output sound to the user's left ear and right ear with the corresponding parameters, so that the sound received by each ear is adjusted separately. In other words, the embodiment of the present invention obtains the deflection angle of the user's head in the actual scene, re-determines the position of the sound source relative to the user's left and right ears, and then controls the sound source so as to adjust the sound received by the user's left ear and the sound received by the user's right ear. In this way, the user receives sound that matches his or her position, which improves the user's sense of immersion and the user experience.
Based on the same inventive concept, an embodiment of the present invention also provides a sound adjustment device. Fig. 4 is a schematic structural diagram of the sound adjustment device in an embodiment of the present invention. As shown in Fig. 4, the device includes: an acquisition module 41, a first determining module 42, a second determining module 43 and an adjustment module 44.
The acquisition module 41 is configured to obtain the deflection angle of the user's head in the actual scene. The first determining module 42 is configured to determine the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene. The second determining module 43 is configured to determine the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear respectively according to the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear. The adjustment module 44 is configured to control, according to the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear, the parameters of the sound output by the sound source to the user's left ear and right ear respectively, so as to separately adjust the sound received by the user's left ear and the sound received by the user's right ear.
The deflection angle of the user's head in the actual scene is the deflection angle of the user's head in the horizontal plane. The parameters of the sound received by the user's left ear or the parameters of the sound received by the user's right ear include: a loudness value and/or a lag duration.
In order to determine the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear, in an optional embodiment the first determining module 42 includes: a first determining submodule, configured to determine the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene; a second determining submodule, configured to determine the distance from the sound source to the user's left ear according to the sound source position in the predetermined VR/AR scene and the position of the user's left ear in the VR/AR scene; and a third determining submodule, configured to determine the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene and the position of the user's right ear in the VR/AR scene.
Further, after the user's head deflects, in order to determine the position of the user's left ear and the position of the user's right ear in the VR/AR scene after deflection, in an optional embodiment, the first determining submodule is specifically configured to determine the post-deflection angle of the user's head in the VR/AR scene according to the pre-deflection angle of the user's head in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene, and to determine the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the post-deflection angle of the user's head in the VR/AR scene.
In practical applications, the acquisition module 41, the first determining module 42, the second determining module 43, the adjustment module 44, the first determining submodule, the second determining submodule and the third determining submodule can be realized by a central processing unit (CPU), a microprocessor unit (MPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or the like located in the device.
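As an illustration of how the modules described above might map onto code, a compact Python sketch follows; the class and method names, the constructor parameters, the hypothetical sensor object and the reuse of the worked example's formulas are all assumptions made for this sketch, not a definitive implementation of the claimed device:

import math

class SoundAdjuster:
    """Illustrative grouping of the acquisition, determining and adjustment modules."""

    def __init__(self, source_xyz, loudness, attenuation, speed=340.0):
        self.source = source_xyz
        self.loudness = loudness          # S: loudness emitted by the source
        self.attenuation = attenuation    # a: attenuation coefficient
        self.speed = speed                # v: propagation speed of sound

    def acquire(self, sensor):
        # Acquisition module: read the head deflection angle from the motion sensor
        # (sensor is a hypothetical object exposing deflection_angle()).
        return sensor.deflection_angle()

    def determine_distances(self, user_xyz, theta):
        # First determining module: ear positions, then source-to-ear distances.
        x2, y2, z2 = user_xyz
        ears = [(x2 - 0.1 * math.sin(theta), y2 - 0.1 * math.cos(theta), z2 + 1.65),
                (x2 + 0.1 * math.sin(theta), y2 + 0.1 * math.cos(theta), z2 + 1.65)]
        return [math.dist(self.source, ear) for ear in ears]

    def determine_parameters(self, distances):
        # Second determining module: per-ear loudness value and lag duration.
        return [(self.loudness * self.attenuation / d ** 2, d / self.speed)
                for d in distances]

    def adjust(self, sensor, user_xyz):
        # Adjustment module: parameters with which the source output is controlled.
        theta = self.acquire(sensor)
        return self.determine_parameters(self.determine_distances(user_xyz, theta))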
This embodiment also describes a computer-readable medium, which may be a ROM (for example, a read-only memory, FLASH memory, a transfer device, etc.), a magnetic storage medium (for example, a magnetic tape, a disk drive, etc.), an optical storage medium (for example, CD-ROM, DVD-ROM, a paper card, paper tape, etc.) or another well-known type of program memory. Computer-executable instructions are stored in the computer-readable medium; when the instructions are executed, they cause at least one processor to perform operations including the following:
obtaining the deflection angle of the user's head in the actual scene; determining the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene; determining the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear respectively according to the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear; and controlling, according to these parameters, the parameters of the sound output by the sound source to the user's left ear and right ear respectively, so as to separately adjust the sound received by the user's left ear and the sound received by the user's right ear.
The sound adjustment method performed by executing these instructions achieves the same beneficial effects as described above: by obtaining the deflection angle of the user's head in the actual scene, the position of the sound source relative to the user's left and right ears is re-determined, and the sound source is controlled accordingly to adjust the sound received by each ear, so that the user receives sound that matches his or her position, which improves the user's sense of immersion and the user experience.
It should be noted that the description of the above device embodiment is similar to the description of the method embodiment above, and the device embodiment has the same beneficial effects as the method embodiment, so the description is not repeated. For technical details not disclosed in the device embodiment of the present invention, those skilled in the art may refer to the description of the method embodiment of the present invention; to save space, they are not repeated here.
It should also be noted that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
It should be noted that, herein, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division of logical functions, and there may be other divisions in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve individually as one unit, or two or more units may be integrated into one unit; the above integrated unit may be realized in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk or an optical disk.
Alternatively, if the above integrated unit of the present invention is realized in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk or an optical disk.
The above is only a specific embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or replacement that can readily be thought of by those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be determined by the scope of protection of the claims.

Claims (10)

1. A sound adjustment method, characterized by comprising:
obtaining the deflection angle of the user's head in the actual scene;
determining the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in a predetermined virtual reality (VR)/augmented reality (AR) scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene;
determining the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear respectively according to the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear;
controlling, according to the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear, the parameters of the sound output by the sound source to the user's left ear and the user's right ear respectively, so as to separately adjust the sound received by the user's left ear and the sound received by the user's right ear.
2. The method according to claim 1, characterized in that determining the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene comprises:
determining the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene;
determining the distance from the sound source to the user's left ear according to the sound source position in the predetermined VR/AR scene and the position of the user's left ear in the VR/AR scene;
determining the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene and the position of the user's right ear in the VR/AR scene.
3. The method according to claim 2, characterized in that determining the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene comprises:
determining the post-deflection angle of the user's head in the VR/AR scene according to the pre-deflection angle of the user's head in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene;
determining the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the post-deflection angle of the user's head in the VR/AR scene.
4. The method according to claim 1, characterized in that the deflection angle of the user's head in the actual scene is the deflection angle of the user's head in the horizontal plane.
5. The method according to claim 1, characterized in that the parameters of the sound received by the user's left ear or the parameters of the sound received by the user's right ear comprise:
a loudness value and/or a lag duration.
6. A sound adjustment device, characterized by comprising:
an acquisition module, configured to obtain the deflection angle of the user's head in the actual scene;
a first determining module, configured to determine the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear according to the sound source position in a predetermined virtual reality (VR)/augmented reality (AR) scene, the user position in the VR/AR scene, and the deflection angle of the user's head in the actual scene;
a second determining module, configured to determine the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear respectively according to the distance from the sound source to the user's left ear and the distance from the sound source to the user's right ear;
an adjustment module, configured to control, according to the parameters of the sound received by the user's left ear and the parameters of the sound received by the user's right ear, the parameters of the sound output by the sound source to the user's left ear and the user's right ear respectively, so as to separately adjust the sound received by the user's left ear and the sound received by the user's right ear.
7. The device according to claim 6, characterized in that the first determining module comprises:
a first determining submodule, configured to determine the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene;
a second determining submodule, configured to determine the distance from the sound source to the user's left ear according to the sound source position in the predetermined VR/AR scene and the position of the user's left ear in the VR/AR scene;
a third determining submodule, configured to determine the distance from the sound source to the user's right ear according to the sound source position in the predetermined VR/AR scene and the position of the user's right ear in the VR/AR scene.
8. The device according to claim 7, characterized in that the first determining submodule is specifically configured to determine the post-deflection angle of the user's head in the VR/AR scene according to the pre-deflection angle of the user's head in the predetermined VR/AR scene and the deflection angle of the user's head in the actual scene, and to determine the position of the user's left ear and the position of the user's right ear in the VR/AR scene according to the user position in the predetermined VR/AR scene and the post-deflection angle of the user's head in the VR/AR scene.
9. The device according to claim 6, characterized in that the deflection angle of the user's head in the actual scene is the deflection angle of the user's head in the horizontal plane.
10. The device according to claim 6, characterized in that the parameters of the sound received by the user's left ear or the parameters of the sound received by the user's right ear comprise:
a loudness value and/or a lag duration.
CN201710035979.XA 2017-01-17 2017-01-17 A sound adjustment method and device Pending CN108319439A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710035979.XA CN108319439A (en) 2017-01-17 2017-01-17 A sound adjustment method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710035979.XA CN108319439A (en) 2017-01-17 2017-01-17 A sound adjustment method and device

Publications (1)

Publication Number Publication Date
CN108319439A true CN108319439A (en) 2018-07-24

Family

ID=62892166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710035979.XA Pending CN108319439A (en) A sound adjustment method and device

Country Status (1)

Country Link
CN (1) CN108319439A (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110270094A (en) * 2019-07-17 2019-09-24 珠海天燕科技有限公司 Method and device for audio control in a game
US11457178B2 (en) 2020-10-20 2022-09-27 Katmai Tech Inc. Three-dimensional modeling inside a virtual video conferencing environment with a navigable avatar, and applications thereof
US10952006B1 (en) 2020-10-20 2021-03-16 Katmai Tech Holdings LLC Adjusting relative left-right sound to provide sense of an avatar's position in a virtual space, and applications thereof
US10979672B1 (en) 2020-10-20 2021-04-13 Katmai Tech Holdings LLC Web-based videoconference virtual environment with navigable avatars, and applications thereof
US11070768B1 (en) 2020-10-20 2021-07-20 Katmai Tech Holdings LLC Volume areas in a three-dimensional virtual conference space, and applications thereof
US11076128B1 (en) 2020-10-20 2021-07-27 Katmai Tech Holdings LLC Determining video stream quality based on relative position in a virtual space, and applications thereof
US11095857B1 (en) 2020-10-20 2021-08-17 Katmai Tech Holdings LLC Presenter mode in a three-dimensional virtual conference space, and applications thereof
US11290688B1 (en) 2020-10-20 2022-03-29 Katmai Tech Holdings LLC Web-based videoconference virtual environment with navigable avatars, and applications thereof
CN113055810A (en) * 2021-03-05 2021-06-29 广州小鹏汽车科技有限公司 Sound effect control method, device, system, vehicle and storage medium
US11743430B2 (en) 2021-05-06 2023-08-29 Katmai Tech Inc. Providing awareness of who can hear audio in a virtual conference, and applications thereof
US11184362B1 (en) 2021-05-06 2021-11-23 Katmai Tech Holdings LLC Securing private audio in a virtual conference, and applications thereof
US11928774B2 (en) 2022-07-20 2024-03-12 Katmai Tech Inc. Multi-screen presentation in a virtual videoconferencing environment
US11651108B1 (en) 2022-07-20 2023-05-16 Katmai Tech Inc. Time access control in virtual environment application
US11876630B1 (en) 2022-07-20 2024-01-16 Katmai Tech Inc. Architecture to control zones
US11741664B1 (en) 2022-07-21 2023-08-29 Katmai Tech Inc. Resituating virtual cameras and avatars in a virtual environment
US11700354B1 (en) 2022-07-21 2023-07-11 Katmai Tech Inc. Resituating avatars in a virtual environment
US11711494B1 (en) 2022-07-28 2023-07-25 Katmai Tech Inc. Automatic instancing for efficient rendering of three-dimensional virtual environment
US11562531B1 (en) 2022-07-28 2023-01-24 Katmai Tech Inc. Cascading shadow maps in areas of a three-dimensional environment
US11704864B1 (en) 2022-07-28 2023-07-18 Katmai Tech Inc. Static rendering for a combination of background and foreground objects
US11776203B1 (en) 2022-07-28 2023-10-03 Katmai Tech Inc. Volumetric scattering effect in a three-dimensional virtual environment with navigable video avatars
US11682164B1 (en) 2022-07-28 2023-06-20 Katmai Tech Inc. Sampling shadow maps at an offset
US11593989B1 (en) 2022-07-28 2023-02-28 Katmai Tech Inc. Efficient shadows for alpha-mapped models
US11956571B2 (en) 2022-07-28 2024-04-09 Katmai Tech Inc. Scene freezing and unfreezing
US11748939B1 (en) 2022-09-13 2023-09-05 Katmai Tech Inc. Selecting a point to navigate video avatars in a three-dimensional environment

Similar Documents

Publication Publication Date Title
CN108319439A (en) A sound adjustment method and device
CN109691141B (en) Spatialization audio system and method for rendering spatialization audio
US20150382130A1 (en) Camera based adjustments to 3d soundscapes
CN104205880A (en) Audio control based on orientation
EP3884335B1 (en) Systems and methods for maintaining directional wireless links of motile devices
CN113396337A (en) Audio enhancement using environmental data
CN104503092A (en) Three-dimensional display method and three-dimensional display device adaptive to different angles and distances
US11112389B1 (en) Room acoustic characterization using sensors
US10819953B1 (en) Systems and methods for processing mixed media streams
US11902735B2 (en) Artificial-reality devices with display-mounted transducers for audio playback
US11026024B2 (en) System and method for producing audio data to head mount display device
CN103702259A (en) Interacting device and interacting method
US11356795B2 (en) Spatialized audio relative to a peripheral device
CN107632704B (en) Mixed reality audio control method based on optical positioning and service equipment
CN105260016A (en) Information processing method and electronic equipment
US11363385B1 (en) High-efficiency motor for audio actuation
CN114097019A (en) Asymmetric pixel operation for compensating optical limitations of a lens
EP4214535A2 (en) Methods and systems for determining position and orientation of a device using acoustic beacons
CN110622106B (en) Apparatus and method for audio processing
US10735885B1 (en) Managing image audio sources in a virtual acoustic environment
EP4329086A1 (en) Apparatuses, systems, and methods for reducing transmission line insertion loss using trimming lines
US11638111B2 (en) Systems and methods for classifying beamformed signals for binaural audio playback
KR20180041464A (en) Method for processing a sound in virtual reality game and application
CN109144265A (en) Display changeover method, device, wearable device and storage medium
US20240012254A1 (en) Coexistence of active dimming display layers and antennas

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180724

WD01 Invention patent application deemed withdrawn after publication