CN109348400B - Method for pre-judging main body pose of 3D sound effect - Google Patents

Method for pre-judging main body pose of 3D sound effect

Info

Publication number
CN109348400B
Authority
CN
China
Prior art keywords: information, main body, pose, sound, judging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811077859.7A
Other languages
Chinese (zh)
Other versions
CN109348400A (en)
Inventor
王小玲 (Wang Xiaoling)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Ziteng Culture Technology Co.,Ltd.
Original Assignee
Taizhou Shengchuang Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taizhou Shengchuang Technology Co., Ltd.
Priority to CN201811077859.7A
Publication of CN109348400A
Application granted
Publication of CN109348400B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field

Abstract

The invention relates to a method for pre-judging the main body pose for a 3D sound effect. The method provides a main body pose capture device, a left power amplifier corresponding to the left ear of the main body, a right power amplifier corresponding to the right ear of the main body, and a sound model; the pose capture device captures the pose information of the main body, the pose information is input into the sound model to obtain sound playing information, and the left and right power amplifiers play the sound playing information. By learning the turning habits of the main body in a preset scene and at a given moment through a self-learning algorithm, the method pre-judges the rotation direction of the main body at the next moment and obtains the corresponding sound information in advance, improving precision.

Description

Method for pre-judging main body pose of 3D sound effect
Technical Field
The invention relates to the field of 3D sound effects, in particular to a method for prejudging a main body pose of a 3D sound effect.
Background
The biggest difference between this 3D sound technology and traditional approaches is that a single set of loudspeakers or headphones can produce a vivid stereo effect and position sound sources at different locations around the user. This ability to track the sound source, known as the localization effect, relies on current HRTF functionality to achieve its striking result.
HRTF is short for Head-Related Transfer Function, which describes how the human ears perceive and distinguish sound sources in three-dimensional space. Briefly, a sound wave reaches the two ears a few millionths of a second apart; the brain can detect these subtle differences, use them to characterize the incoming sound wave, and then convert them into the location of the sound in space.
Most existing sound cards with 3D sound effects process in-game audio with an HRTF conversion method, leading the user's brain to hear sounds as coming from different places. Games that support sound-source localization bind sounds to the game's objects, characters, or other sound sources; as these sounds and your position in the game change, the sound card adjusts the transmitted sound signals according to the relative position.
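To make the idea of position-dependent binaural rendering concrete, the following sketch is a deliberately simplified stand-in for HRTF processing: a Woodworth-style interaural time difference plus equal-power level panning driven by the source azimuth. It only illustrates the general principle; the constants and the function are assumptions for this sketch, not part of the patent, which presumes a full HRTF-based sound model.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius

def toy_binaural_params(azimuth_rad):
    """Very rough binaural cues for a source at the given azimuth.

    azimuth_rad: 0 = straight ahead, positive = towards the listener's right,
    assumed to lie in [-pi/2, pi/2] for the Woodworth approximation.
    Returns (left_gain, right_gain, interaural_delay_seconds).
    """
    # Woodworth approximation of the interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))
    # Equal-power panning: the ear facing the source receives more level.
    pan = math.sin(azimuth_rad)          # -1 (full left) .. +1 (full right)
    left_gain = math.sqrt(0.5 * (1.0 - pan))
    right_gain = math.sqrt(0.5 * (1.0 + pan))
    return left_gain, right_gain, itd
```

A real 3D sound card replaces this approximation with measured HRTF filtering, but the dependence on the relative position between source and listener is the same.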
At present, with the popularization of 3D-glasses technology, the pose of the main body's head can be sampled. The original 3D sound effect method first determines a sound source, performs a conversion based on the position and content of the sound source and the position of the main body, and then performs a further conversion based on the pose of the main body to obtain the sound data output by the different power amplifier units. Because the amount of computation is large, delays occur easily and the user experience suffers. At present, sound data is obtained in advance through preprocessing to keep the output continuous: the likely pose of the main body is calculated before its actual pose is known, and if the actual pose matches the predicted pose the precomputed result can be output directly, shortening the processing time and keeping the sound continuous. The accuracy of the prediction, however, is a major problem: if the prediction is wrong repeatedly, a large delay results and the sound effect is degraded.
Disclosure of Invention
In view of the above, the present invention provides a method for pre-judging the main body pose for a 3D sound effect to solve the above problems.
In order to solve the technical problems, the technical scheme of the invention is as follows: a main body pose prejudging method of a 3D sound effect comprises the steps of providing a main body pose capturing device, a left power amplifier corresponding to a left ear of a main body, a right power amplifier corresponding to a right ear of the main body and a sound model, wherein the main body pose capturing device is used for capturing pose information of the main body, the pose information of the main body is input into the sound model to obtain sound playing information, and the left power amplifier and the right power amplifier are used for playing the sound playing information;
the method specifically comprises the following steps:
a first acquisition step of acquiring the pose information of the main body at the current moment;
a pose prejudging step, in which direction prejudging information is generated by a random weight distribution algorithm according to the pose information of the main body at this moment and the current time, wherein each piece of direction prejudging information comprises a direction prejudging angle range, the direction prejudging angle range reflects the rotation angle of the main body, and the prejudged pose information of the main body at the next moment is generated according to the direction prejudging information; in the random weight distribution algorithm, each piece of direction prejudging information at this moment is assigned a corresponding weight value, and the larger the weight value, the more easily that piece of direction prejudging information is selected by the algorithm;
a pre-judging calculation step, namely inputting the pre-judging pose information into the sound model to obtain pre-judging sound playing information;
a second acquisition step of acquiring pose information of the main body at the next moment;
a pose judgment step of calculating the rotation direction of the main body according to the pose information obtained in the first acquisition step and the pose information obtained in the second acquisition step, and if the rotation direction of the main body is within the direction pre-judgment angle range, playing the pre-judged sound playing information and increasing the weight value of the direction pre-judgment information; and if the rotating direction of the main body is out of the direction pre-judging angle range, inputting the pose information of the main body at the next moment into the sound model, playing the sound playing information output by the sound model, and reducing the weight value of the direction pre-judging information.
Further, at each moment, the rotation angle of the main body is the rotation angle of the main body in three-dimensional space within the sound model, the number of pieces of corresponding direction pre-judging information is set to 16, and the direction pre-judging angle range corresponding to each piece of direction pre-judging information is set to 90 degrees.
Further, the rotation angle of the main body is an equivalent rotation angle of a preset plane of the main body in the sound model.
Further, at each moment, the number of pieces of corresponding direction pre-judging information is set to 12, and the direction pre-judging angle range corresponding to each piece of direction pre-judging information is set to 30 degrees.
Further, the pose judgment step is configured with a division substep, and the division substep is configured with a division weight threshold: when the weight value corresponding to one piece of direction pre-judgment information exceeds the threshold, that piece of direction pre-judgment information is divided into at least two new pieces, and the weight value and the direction pre-judgment angle range of the original piece are divided among the new pieces.
The technical effects of the invention are mainly reflected in the following aspects: with the above arrangement, the turning habits of the main body in a preset scene and at a given moment are learned in a self-learning manner, so that the rotation direction of the main body at the next moment can be pre-judged and the corresponding sound information obtained in advance, improving precision.
Detailed Description
The embodiments of the present invention are described in further detail to make the technical solutions of the present invention easier to understand and master.
A main body pose pre-judging method for a 3D sound effect provides a main body pose capture device, a left power amplifier corresponding to the left ear of the main body, a right power amplifier corresponding to the right ear of the main body, and a sound model; the pose capture device captures the pose information of the main body, the pose information is input into the sound model to obtain sound playing information, and the left and right power amplifiers play the sound playing information. The idea of the invention is to build up habits from the main body's historical facing information: in a specific scene, the pose habits of a user necessarily follow certain patterns, so sound information calculated in advance is more likely to match the user's actual rotation direction, which achieves the pre-judging effect.
The method specifically comprises the following steps:
A first acquisition step of acquiring the pose information of the main body at the current moment.
A pose pre-judging step: according to the pose information of the main body at this moment and the current time, direction pre-judgment information is generated by a random weight distribution algorithm; each piece of direction pre-judgment information comprises a direction pre-judgment angle range, which reflects the rotation angle of the main body, and the pre-judged pose information of the main body at the next moment is generated from the direction pre-judgment information; in the random weight distribution algorithm, each piece of direction pre-judgment information at this moment is assigned a corresponding weight value, and the larger the weight value, the more easily that piece is selected by the algorithm. In one embodiment, at each moment, the rotation angle of the main body is its rotation angle in three-dimensional space within the sound model, the number of pieces of corresponding direction pre-judgment information is set to 16, and the direction pre-judgment angle range corresponding to each piece is set to 90 degrees. In principle the pose information of the main body is three-dimensional and the relative rotation direction also lies in three-dimensional space, so the direction is captured through the 16 pieces of direction pre-judgment information.
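As one concrete reading of the weighted random selection, the sketch below picks a piece of direction pre-judgment information in proportion to its weight. It is an illustration only: the bin layout, initial weights, and function name are hypothetical, and the patent does not specify the sampling routine.

```python
import random

# Hypothetical set-up: 16 pieces of direction pre-judgment information for the
# three-dimensional embodiment, each carrying a learned weight. How the 16
# angle ranges tile 3D space is not spelled out here; entries are indexed only.
direction_info = [{"bin": i, "weight": 1.0} for i in range(16)]

def select_direction(direction_info):
    """Pick one piece of direction pre-judgment information.

    Entries with larger weights are proportionally more likely to be chosen,
    which is the behaviour required of the random weight distribution algorithm.
    """
    weights = [d["weight"] for d in direction_info]
    return random.choices(direction_info, weights=weights, k=1)[0]
```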
In another embodiment, the rotation angle of the main body is an equivalent rotation angle of a preset plane of the main body in the sound model. At each moment, the number of pieces of direction pre-judgment information is set to 12, and the direction pre-judgment angle range corresponding to each piece is set to 30 degrees. It should be noted that the human ear is poor at distinguishing up from down, so the invention preferably projects the up-and-down rotation components onto the plane, that is, only the circumferential rotation of the main body is judged; this improves data-processing efficiency and also the precision.
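For the planar embodiment, the rotation can be reduced to a change in yaw and mapped onto the 12 ranges of 30 degrees. The arithmetic below is an assumed reading; the patent does not spell out the wrap-around handling.

```python
BIN_WIDTH_DEG = 30                 # planar embodiment: 12 ranges of 30 degrees
NUM_BINS = 360 // BIN_WIDTH_DEG    # = 12

def planar_rotation(yaw_before_deg, yaw_after_deg):
    """Equivalent planar rotation between two poses, wrapped into [0, 360)."""
    return (yaw_after_deg - yaw_before_deg) % 360.0

def rotation_bin(rotation_deg):
    """Index of the 30-degree direction pre-judgment range containing the rotation."""
    return int(rotation_deg // BIN_WIDTH_DEG) % NUM_BINS
```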
A pre-judging calculation step, namely inputting the pre-judging pose information into the sound model to obtain pre-judging sound playing information;
A second acquisition step of acquiring the pose information of the main body at the next moment; this yields the pose that the main body actually adopts at the next moment.
A pose judgment step: the rotation direction of the main body is calculated from the pose information obtained in the first acquisition step and the pose information obtained in the second acquisition step. If the rotation direction of the main body falls within the direction pre-judgment angle range, the pre-judged sound playing information is played and the weight value of that direction pre-judgment information is increased; if the rotation direction falls outside the range, the pose information of the main body at the next moment is input into the sound model, the sound playing information output by the sound model is played, and the weight value of that direction pre-judgment information is decreased. In other words, when the pre-judgment is correct the precomputed audio information is output; when it is wrong the precomputed audio is discarded, and the corresponding weight value is raised or lowered according to the outcome. Trained over many samples, the judgment therefore tends to become accurate. It should also be noted that, when time permits, the pre-judging steps can be executed multiple times, so the pre-judgment information is not limited to a single direction.
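Put together, the judgment step might look like the following sketch. The callables and the weight step of 0.1 are assumptions for illustration; the patent only requires that a correct pre-judgment plays the precomputed audio and raises the weight, while a wrong one recomputes from the actual pose and lowers the weight.

```python
WEIGHT_STEP = 0.1   # hypothetical adjustment size; the patent does not give a value

def judge_and_play(chosen, rotation_deg, predicted_audio, actual_pose, sound_model, play):
    """Compare the measured rotation with the predicted angle range and react.

    chosen: the selected direction pre-judgment entry, e.g. {"range": (0, 30), "weight": 1.2}
    rotation_deg: rotation computed from the first and second acquisition steps
    sound_model, play: callables standing in for the sound model and the power amplifiers
    """
    lo, hi = chosen["range"]
    if lo <= rotation_deg < hi:
        play(predicted_audio)                          # pre-judgment correct: reuse precomputed audio
        chosen["weight"] += WEIGHT_STEP                # reinforce this direction
    else:
        play(sound_model(actual_pose))                 # pre-judgment wrong: compute from the real pose
        chosen["weight"] = max(chosen["weight"] - WEIGHT_STEP, 0.0)  # weaken this direction
```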
The pose judgment step is configured with a division substep, which is configured with a division weight threshold: when the weight value corresponding to one piece of direction pre-judgment information exceeds the threshold, that piece is divided into at least two new pieces of direction pre-judgment information, and the weight value and the direction pre-judgment angle range of the original piece are divided among the new pieces. In this way the frequently confirmed directions are resolved more finely, so that with further training the predicted rotation direction becomes more accurate and reliable.
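One possible reading of the division substep, sketched with an equal split of both the weight and the angle range; the threshold value is hypothetical and the patent leaves the exact division open.

```python
SPLIT_WEIGHT_THRESHOLD = 5.0   # hypothetical division weight threshold

def maybe_split(entry):
    """Split a direction pre-judgment entry whose weight exceeds the threshold.

    The original angle range and weight are divided equally between two new
    entries, so directions that are confirmed often get finer angular resolution.
    """
    if entry["weight"] <= SPLIT_WEIGHT_THRESHOLD:
        return [entry]
    lo, hi = entry["range"]
    mid = (lo + hi) / 2.0
    half = entry["weight"] / 2.0
    return [{"range": (lo, mid), "weight": half},
            {"range": (mid, hi), "weight": half}]
```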
The above are only typical examples of the present invention, and besides, the present invention may have other embodiments, and all the technical solutions formed by equivalent substitutions or equivalent changes are within the scope of the present invention as claimed.

Claims (5)

1. A main body pose prejudging method for a 3D sound effect is characterized by comprising the steps of providing a main body pose capturing device, a left power amplifier corresponding to a left ear of a main body, a right power amplifier corresponding to a right ear of the main body and a sound model, wherein the main body pose capturing device is used for capturing pose information of the main body, the pose information of the main body is input into the sound model to obtain sound playing information, and the left power amplifier and the right power amplifier are used for playing the sound playing information;
the method specifically comprises the following steps:
a first acquisition step of acquiring pose information of the subject at that time;
a pose prejudging step, in which direction prejudging information is generated by a random weight distribution algorithm according to the pose information of the main body at this moment and the current time, wherein each piece of direction prejudging information comprises a direction prejudging angle range, the direction prejudging angle range reflects the rotation angle of the main body, and the prejudged pose information of the main body at the next moment is generated according to the direction prejudging information; in the random weight distribution algorithm, each piece of direction prejudging information at this moment is assigned a corresponding weight value, and the larger the weight value, the more easily that piece of direction prejudging information is selected by the algorithm;
a pre-judging calculation step, namely inputting the pre-judging pose information into the sound model to obtain pre-judging sound playing information;
a second acquisition step of acquiring pose information of the main body at the next moment;
a pose judgment step of calculating the rotation direction of the main body according to the pose information in the first acquisition step and the pose information obtained in the second acquisition step, and if the rotation direction of the main body is within the direction pre-judgment angle range, playing the pre-judged sound playing information and increasing the weight value of the direction pre-judgment information; and if the rotating direction of the main body is out of the direction pre-judging angle range, inputting the pose information of the main body at the next moment into the sound model, playing the sound playing information output by the sound model, and reducing the weight value of the direction pre-judging information.
2. The method for pre-judging the pose of a main body of a 3D sound effect according to claim 1, wherein at each moment, the rotation angle of the main body is the rotation angle of the main body in a three-dimensional space in a sound model, 16 pieces of corresponding direction pre-judging information are set, and the range of the direction pre-judging angle corresponding to each piece of direction pre-judging information is set to be 90 degrees.
3. The method for pre-judging the pose of a main body of a 3D sound effect as claimed in claim 1, wherein the rotation angle of the main body is an equivalent rotation angle of a preset plane of the main body in the sound model.
4. The method for pre-judging the pose of a main body of a 3D sound effect as claimed in claim 3, wherein at each moment, the corresponding direction pre-judging information is set to 12, and the direction pre-judging angle range corresponding to each direction pre-judging information is set to 30 degrees.
5. The method for prejudging the main body pose of the 3D sound effect according to claim 1, wherein the pose judgment step is configured with a division substep, the division substep is configured with a division weight threshold, when a weight value corresponding to one direction prejudgment information exceeds the weight threshold, the direction prejudgment information is divided into at least two new direction prejudgment information, and each new direction prejudgment information is divided into the weight value of the original direction prejudgment information and a direction prejudgment angle range.
CN201811077859.7A (filed 2018-09-16, priority date 2018-09-16): Method for pre-judging main body pose of 3D sound effect. Status: Active. Granted as CN109348400B (en).

Priority Applications (1)

CN201811077859.7A (granted as CN109348400B), priority date 2018-09-16, filing date 2018-09-16: Method for pre-judging main body pose of 3D sound effect


Publications (2)

CN109348400A (en), published 2019-02-15
CN109348400B (en), published 2020-08-04

Family

ID=65305775

Family Applications (1)

CN201811077859.7A (granted as CN109348400B, Active), priority date 2018-09-16, filing date 2018-09-16: Method for pre-judging main body pose of 3D sound effect

Country Status (1)

Country Link
CN (1) CN109348400B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11726551B1 (en) 2020-08-26 2023-08-15 Apple Inc. Presenting content based on activity

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000197198A (en) * 1998-12-25 2000-07-14 Matsushita Electric Ind Co Ltd Sound image moving device
CN102163254A (en) * 2010-02-15 2011-08-24 Sony Corp Content reproduction apparatus, mobile appliance, and abnormality detection method
CN103314391A (en) * 2011-01-05 2013-09-18 Softkinetic Software SA Natural gesture based user interface methods and systems
WO2015024881A1 (en) * 2013-08-20 2015-02-26 Bang & Olufsen A/S A system for and a method of generating sound
CN106388831A (en) * 2016-11-04 2017-02-15 Zhengzhou Institute of Aeronautical Industry Management Method for detecting falling actions based on sample weighting algorithm
CN106462747A (en) * 2014-06-17 2017-02-22 Nant Holdings IP LLC Activity recognition systems and methods
CN108351986A (en) * 2015-10-30 2018-07-31 Morpho Inc Learning system, learning device, learning method, learning program, training data generating means, training data generation method, training data generate program, terminal installation and threshold value change device




Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right (effective date of registration: 2020-07-08)
  Applicant before: Wang Xiaoling, 365100 No. 132, Industrial Road, Po Tau Industrial Park, Youxi County, Sanming City, Fujian
  Applicant after: Taizhou Shengchuang Technology Co., Ltd., 317100 Room 307-1, Building 1, Chaoyang Road, Hairun Street, Sanmen County, Taizhou City, Zhejiang Province
GR01: Patent grant
TR01: Transfer of patent right (effective date of registration: 2021-08-09)
  Patentee before: Taizhou Shengchuang Technology Co., Ltd., 317100 Room 307-1, Building 1, Chaoyang Road, Hairun Street, Sanmen County, Taizhou City, Zhejiang Province
  Patentee after: Chongqing Ziteng Culture Technology Co., Ltd., 402760 No. 6113-6116, Building 6, Xiuhushui Street, No. 19 Quanshan Road, Biquan Street, Bishan District, Chongqing