CN107168518B - Synchronization method and device for head-mounted display and head-mounted display


Info

Publication number
CN107168518B
CN107168518B (application CN201710218231.3A)
Authority
CN
China
Prior art keywords
sound
head
virtual reality
mounted display
reality image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710218231.3A
Other languages
Chinese (zh)
Other versions
CN107168518A (en)
Inventor
李为
Current Assignee
Beijing Pico Technology Co Ltd
Original Assignee
Beijing Pico Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Pico Technology Co Ltd filed Critical Beijing Pico Technology Co Ltd
Priority to CN201710218231.3A priority Critical patent/CN107168518B/en
Publication of CN107168518A publication Critical patent/CN107168518A/en
Application granted granted Critical
Publication of CN107168518B publication Critical patent/CN107168518B/en
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/16 — Sound input; Sound output
    • G06F 3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Stereophonic System (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a synchronization method and apparatus for a head-mounted display, and a head-mounted display. The synchronization method comprises the following steps: acquiring a virtual reality image and the original sound signals corresponding to the virtual reality image; determining the sound source in the virtual reality image corresponding to each original sound signal; detecting the posture of the head-mounted display and its position information relative to each sound source; adjusting the parameters of each original sound signal according to the posture and all the position information to obtain a current sound signal; and controlling a sound generating device to play the current sound signal. Through the embodiments of the invention, the sound in the virtual reality system can be synchronized with the real environment, thereby simulating the sound effects of a real environment and improving the user experience.

Description

Synchronization method and device for head-mounted display and head-mounted display
Technical Field
The present invention relates to the field of head-mounted display technologies, and in particular, to a synchronization method and apparatus for a head-mounted display, and a head-mounted display.
Background
Virtual reality (VR) is an advanced technology that has emerged in recent years. Virtual reality technology is a key enabler of a comprehensive, integrated, multidimensional information space that combines qualitative, quantitative, and perceptual recognition. As network speeds increase, an internet era based on virtual reality technology is quietly approaching, one that will greatly change how people work and live.
A head-mounted display can present pre-stored virtual reality images to construct a virtual reality environment for the user. For the sound of that environment, however, most head-mounted displays simply continue the traditional approach of playing unmodified audio through a loudspeaker or earphones. To better simulate reality in a virtual reality environment, the sound effects of a real environment must be reproduced. When the position or posture of the head-mounted display changes, the user cannot perceive the change through the sound played by the head-mounted display, so the sound of the virtual reality environment is out of step with the real environment.
Disclosure of Invention
It is an object of the present invention to provide a new solution to one of the above-mentioned problems.
According to a first aspect of the present invention, there is provided a synchronization method for a head mounted display, comprising:
acquiring a virtual reality image and an original sound signal corresponding to the virtual reality image;
determining a sound source corresponding to each original sound signal in the virtual reality image;
detecting the posture of the head-mounted display and the position information of the head-mounted display relative to each sound source;
adjusting the parameters of each original sound signal according to the posture and all the position information to obtain a current sound signal;
and controlling a sound generating device to play the current sound signal.
Optionally, the parameter includes at least one of intensity, frequency, tone and timbre.
Optionally, after detecting the posture of the head-mounted display and the position information of the head-mounted display relative to each of the sound sources, the method further includes:
moving the virtual reality image according to the attitude and the position information to obtain a current virtual reality image;
and controlling a display device to display the current virtual reality image.
Optionally, the determining a sound source corresponding to each original sound signal in the virtual reality image includes:
decomposing an original sound signal comprising at least two sound channels to obtain original sound channel signals equal in number to the sound channels;
and determining sound sources corresponding to each original sound channel signal one by one.
Optionally, the adjusting the parameter of each original sound signal according to the posture and all the position information to obtain the current sound signal includes:
respectively adjusting the parameters of each original sound channel signal according to the posture and each piece of position information to obtain a corresponding current sound channel signal;
and combining all the current sound channel signals to obtain the current sound signal.
According to a second aspect of the present invention, there is provided a synchronization apparatus for a head-mounted display, comprising:
an acquisition module for acquiring a virtual reality image and an original sound signal corresponding to the virtual reality image;
the sound source determining module is used for determining a sound source corresponding to each original sound signal in the virtual reality image;
the detection module is used for detecting the posture of the head-mounted display and the position information of the head-mounted display relative to each sound source;
the adjusting module is used for adjusting the parameters of each original sound signal according to the posture and all the position information to obtain a current sound signal;
and the sound production control module is used for controlling the sound production device to play the current sound signal.
Optionally, the parameter includes at least one of intensity, frequency, tone and timbre.
Optionally, the apparatus further comprises:
the moving module is used for carrying out moving processing on the virtual reality image according to the posture and the position information to obtain a current virtual reality image;
and the display control module is used for controlling the display device to display the current virtual reality image.
Optionally, the sound source determining module includes:
a decomposition unit for decomposing an original sound signal comprising at least two sound channels to obtain original sound channel signals equal in number to the sound channels;
and the determining unit is used for determining the sound sources corresponding to the original sound channel signals one by one.
Optionally, the adjusting module includes:
the adjusting unit is used for respectively adjusting the parameters of each original sound channel signal according to the posture and each position information to obtain a corresponding current sound channel signal;
and the merging unit is used for merging all the current sound channel signals to obtain the current sound signal.
According to a third aspect of the present invention there is provided a head mounted display comprising a synchronization apparatus according to the second aspect of the present invention.
According to a fourth aspect of the present invention, there is provided a head-mounted display comprising a memory and a processor, the memory storing instructions for controlling the processor to perform the synchronization method according to the first aspect of the present invention.
An advantage of the embodiments of the invention is that realistic sound effects can be reproduced in a virtual reality environment: the original sound signal is dynamically adjusted according to information such as the position and posture of the user, so that the sound in the virtual reality system is synchronized with the real environment, the sound effects of a real environment are simulated, and the user experience is improved.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram of one embodiment of a synchronization method for a head mounted display according to the present invention;
FIG. 2 is a flow diagram of another embodiment of a synchronization method for a head mounted display according to the present invention;
FIG. 3 is a block diagram of an embodiment of a synchronization apparatus for a head-mounted display according to the present invention;
FIG. 4 is a block diagram of another embodiment of a synchronization apparatus for a head-mounted display according to the present invention;
fig. 5 is a block schematic diagram of an implementation structure of a head-mounted display according to the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In order to solve the prior-art problem that, when the position or posture of the head-mounted display changes, the user cannot perceive the change through the sound played by the head-mounted display, a synchronization method for a head-mounted display is provided. The head-mounted display can be an all-in-one (standalone) device or a split (tethered) device.
Fig. 1 is a flowchart of one embodiment of a synchronization method for a head-mounted display according to the present invention.
According to fig. 1, the synchronization method comprises the following steps:
step S110 is to acquire a virtual reality image and an original sound signal corresponding to the virtual reality image.
The virtual reality image can be captured in advance by a camera of the head-mounted display or downloaded by the head-mounted display from a network; the original sound signal can be recorded in advance by a microphone of the head-mounted display, downloaded from a network, or captured in real time through the microphone. The virtual reality images and the original sound signals correspond to one another, and every virtual reality image has at least one corresponding original sound signal.
The virtual reality image and the original sound signal may be acquired in either order. For example, a lookup table reflecting the correspondence between virtual reality images and original sound signals may be stored in advance. Given an acquired virtual reality image, the table is searched to determine the original sound signal corresponding to that image; conversely, given an acquired original sound signal, the table is searched to determine the virtual reality image corresponding to that sound signal.
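The lookup-table approach described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the scene identifiers, file names, and function names are hypothetical.

```python
# Hypothetical lookup table pairing virtual reality images with their
# original sound signals. A forward table maps each image to its sounds;
# an inverse table is derived from it for sound-to-image lookups.
IMAGE_TO_SOUNDS = {
    "scene_beach": ["waves.wav", "seagulls.wav"],
    "scene_cinema": ["movie_audio.wav"],
}

SOUND_TO_IMAGE = {
    sound: image
    for image, sounds in IMAGE_TO_SOUNDS.items()
    for sound in sounds
}

def sounds_for_image(image_id):
    """Given a virtual reality image, find its original sound signals."""
    return IMAGE_TO_SOUNDS.get(image_id, [])

def image_for_sound(sound_id):
    """Inverse lookup: given an original sound signal, find its image."""
    return SOUND_TO_IMAGE.get(sound_id)
```

Either direction of lookup works from the same stored table, matching the "no fixed order of acquisition" point above.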
Further, after step S110 is executed, the display device may be controlled to display the virtual reality image. The virtual reality environment is provided for the user, and the user experience is improved.
In step S120, a sound source corresponding to each original sound signal in the virtual reality image is determined.
Specifically, the original sound signal corresponding to each sound source in the virtual reality image may be stored in advance, and the corresponding original sound signal and sound source may be labeled, so that the original sound signal and sound source may be matched when step S120 is performed.
If the original sound signal is a monaural signal, there is one sound source in the virtual reality image corresponding to the original sound signal. In this way, a sound source corresponding to the monaural original sound signal can be simulated in space, and this simulated sound source corresponds to the position of the sound source in the virtual reality image.
If the original sound signal is a multi-channel signal, that is, it comprises at least two sound channels, the number of sound sources corresponding to the original sound signal equals the number of its sound channels. In this case, step S120 may include the following steps S121 to S122:
step S121, decomposing an original sound signal including at least two channels to obtain an original channel signal corresponding to the number of channels.
Further, for an original sound signal including two or more channels, the original sound signal may be decomposed into a plurality of original channel signals according to the number and definition of channels, and the number of channels is equal to the number of original channel signals.
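The decomposition of step S121 can be sketched for the common case of interleaved samples. This is an illustrative assumption: the patent does not specify the storage layout, and the function name is hypothetical.

```python
def split_channels(interleaved, num_channels):
    """Decompose an interleaved multi-channel signal into one signal per
    channel. The number of resulting channel signals equals num_channels,
    as step S121 requires. The input is assumed interleaved, i.e. samples
    alternate L, R, L, R, ... for a stereo signal.
    """
    return [interleaved[c::num_channels] for c in range(num_channels)]

# Stereo example: six interleaved samples become two channel signals.
stereo = [0.1, -0.1, 0.2, -0.2, 0.3, -0.3]
left, right = split_channels(stereo, 2)
```

The same call decomposes a 5.1 signal into six original channel signals by passing `num_channels=6`.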
In step S122, sound sources corresponding to each original channel signal are determined.
For example, each original sound channel signal can simulate a sound source in space, and the simulated sound sources are in one-to-one correspondence with the positions of the sound sources in the virtual reality image.
In step S130, the posture of the head-mounted display and the position information of the head-mounted display relative to each sound source are detected.
The location information includes: the distance, angle, direction, etc. of the head mounted display relative to each sound source in the virtual reality image.
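The distance and angle of the display relative to a sound source can be derived from scene coordinates; the following 2-D sketch is illustrative (the patent does not prescribe a coordinate convention, and the function name and yaw convention here are assumptions).

```python
import math

def relative_position(hmd_pos, hmd_yaw, source_pos):
    """Distance and bearing of a sound source relative to the head-mounted
    display. hmd_pos and source_pos are (x, y) coordinates in the virtual
    scene; hmd_yaw is the display's heading in radians. Returns
    (distance, angle), where angle is the source direction relative to the
    direction the user is facing, normalized to [-pi, pi).
    """
    dx = source_pos[0] - hmd_pos[0]
    dy = source_pos[1] - hmd_pos[1]
    distance = math.hypot(dx, dy)
    # World-frame bearing of the source, made relative to the head yaw.
    angle = math.atan2(dy, dx) - hmd_yaw
    angle = (angle + math.pi) % (2 * math.pi) - math.pi
    return distance, angle
```

A 3-D scene would extend this with elevation, but the distance/angle structure is the same.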
Specifically, in the embodiment of the present invention, the posture sensors for detecting the posture of the head-mounted display include, but are not limited to: an inertial measurement unit, a wearable position-capture sensor, a head-recognition sensor, and the like. The inertial measurement unit comprises at least an acceleration sensor and a gyroscope, and may also comprise a geomagnetic sensor. The spatial sensors for detecting the position information of the head-mounted display relative to each sound source include, but are not limited to, an inertial measurement unit or a camera.
Step S140, adjusting the parameters of each original sound signal according to the posture and all the position information to obtain the current sound signal.
The parameter includes at least one of intensity, frequency, pitch, and timbre.
Specifically, step S140, comprising receiving the original sound signal, the posture of the head-mounted display, and the position information of the head-mounted display relative to all sound sources, adjusting the parameters of the original sound signal, and outputting the current sound signal, may be completed in the sound-effect synthesizer of the head-mounted display system.
Furthermore, the frequency at which the posture of the head-mounted display and its position information relative to all sound sources are received can be customized according to the needs of the user; for example, but not limited to, the data may be received once per second. In this way, whenever the position or posture of the user changes, the data change accordingly, and the sound effect heard by the user wearing the head-mounted display changes with them, improving the real-time responsiveness of the sound effect.
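A polling loop at a configurable interval, as described above, might look like the following sketch. The callback names and the change-detection strategy are illustrative assumptions, not from the patent.

```python
import time

def run_sync_loop(read_pose, read_positions, update_sound,
                  interval=1.0, steps=None):
    """Poll the posture and per-source position data at a configurable
    interval (e.g. once per second, as in the example above) and push any
    change to the sound pipeline. `steps` bounds the loop for testing;
    a real head-mounted display would run it indefinitely.
    """
    last = None
    count = 0
    while steps is None or count < steps:
        sample = (read_pose(), read_positions())
        if sample != last:          # re-adjust only when the data changed
            update_sound(*sample)
            last = sample
        count += 1
        time.sleep(interval)
```

Shortening `interval` trades CPU time for lower latency between a head movement and the corresponding sound change.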
Corresponding to the case where the original sound signal is a multi-channel signal in the above embodiment, the step S140 includes the following steps S141 and S142.
Step S141, adjusting the parameters of each original channel signal according to the posture and all the position information to obtain a corresponding current channel signal.
Step S142, merging all the current channel signals to obtain the current sound signal.
In an embodiment of the present invention, the sound generating device for playing the sound signal may be a loudspeaker or earphones. If the sound generating device is a loudspeaker, all current channel signals may be combined into one current sound signal. If the sound generating device is a pair of earphones, all current channel signals may be combined into a left-channel current sound signal and a right-channel current sound signal, with the left-channel signal output to the left earphone and the right-channel signal output to the right earphone. In this way, the sound heard by the user's left and right ears is a mixture of the many sounds in the environment: stereo sound with a surround effect.
According to the user's position information and how it changes relative to the sound sources, the intensity of some original channel signals is weakened, the intensity of others is strengthened, and some are altered by interference from other original channel signals. This may result, for example, in an increase in the intensity of the left-channel current sound signal and a decrease in that of the right channel.
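One simple way to realize such left/right intensity changes is a constant-power pan law driven by the source's relative angle. The patent does not specify a pan law; this is an illustrative stand-in, and the angle convention (positive = source to the user's left) is an assumption.

```python
import math

def left_right_gains(angle):
    """Weight the left/right channel gains from a source's angle relative
    to the direction the user is facing, in radians. Uses an equal-power
    pan law: a source dead ahead reaches both ears equally, and the gains
    shift smoothly as the user turns.
    """
    # Map the angle to a pan position in [0, 1]: 0 = fully right, 1 = fully left.
    pan = 0.5 + 0.5 * math.sin(angle)
    left = math.sin(pan * math.pi / 2)
    right = math.cos(pan * math.pi / 2)
    return left, right
```

Turning the head so a source moves from straight ahead to directly left drives the right-ear gain toward zero, which is exactly the kind of intensity shift described above.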
On this basis, embodiments of the invention can simulate the reverberation and propagation of a sound according to the environment type of the virtual reality image (such as a small room or an open scene), automatically adjust the sound according to the positional relation between the sound source and the listener, and dynamically adjust certain sounds in certain environments (for example, adjusting the human voice in a movie theater). Embodiments of the invention may also process the sound signal according to its diffusion, attenuation, and echo characteristics during propagation, as well as the effects of collisions with objects (such as walls) and of combination (several sounds influencing one another). Embodiments of the invention can also apply different weighting gains or attenuations to the left and right channels, so that the user genuinely feels the positional changes of the sound sources.
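The attenuation characteristic mentioned above can be sketched with a simple inverse-distance law, a common free-field approximation. The patent does not commit to a particular attenuation model; this function and its reference-distance parameter are illustrative.

```python
def attenuate(samples, distance, ref=1.0):
    """Scale a channel signal by an inverse-distance law: gain is 1 inside
    the reference distance and falls off as ref/distance beyond it. A
    minimal stand-in for the diffusion/attenuation behavior described above.
    """
    gain = ref / max(distance, ref)
    return [s * gain for s in samples]
```

Richer models (reverberation, echo, wall occlusion) would layer filters on top of this basic distance gain.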
If the original sound signal conforms to a predefined format, even more attributes can be defined to achieve a more realistic simulation. For example, for a recording containing a dialogue between two people, a custom attribute (such as emotion) may be attached to each voice in addition to the attributes of the sound itself, and the sound can then be processed according to that custom attribute as needed. It is also possible, for example, to process the vocal frequency range separately, blurring the voice when crying or boosting certain frequencies when angry, to achieve a more realistic effect.
Step S150, controlling the sound generating device to play the current sound signal.
The sound generating device may be, but is not limited to, a loudspeaker or earphones of the head-mounted display, or an external electronic device such as a speaker.
Therefore, embodiments of the invention can reproduce realistic sound effects in a virtual reality environment: the original sound signal is dynamically adjusted according to information such as the position and posture of the user, simulating the sound effects of a real environment and improving the user experience.
Further, embodiments of the present invention can deliver different sounds for different usage environments and accept the desired parameters from different devices to achieve the desired effects, which broadens their range of application.
Fig. 2 is a flowchart of another embodiment of a synchronization method for a head mounted display according to the present invention.
After step S130, the synchronization method of the present invention further includes the steps shown in fig. 2:
Step S210, performing movement processing on the virtual reality image according to the posture and the position information to obtain the current virtual reality image.
For example, when the user moves to the left, the virtual reality image may be processed to move to the right at the same time to achieve synchronization of the current virtual reality environment with the real environment.
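The counter-movement in that example can be sketched as a simple offset update; the coordinate convention (positive x to the user's right) and function name are illustrative assumptions.

```python
def counter_shift(image_offset, user_delta):
    """Shift the virtual reality image opposite to the user's movement,
    as in step S210: a step to the left (negative x) moves the rendered
    scene to the right by the same amount, keeping the scene and the
    real environment in sync.
    """
    return (image_offset[0] - user_delta[0],
            image_offset[1] - user_delta[1])
```

In a real renderer this offset would feed the view transform each frame rather than a static image position.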
Step S220, controlling the display device to display the current virtual reality image.
Therefore, the synchronization of sound and images in the virtual reality environment is realized, a more real effect is provided for a user, and the user experience is further improved.
Corresponding to the method, the invention also provides a synchronization device for the head-mounted display. Fig. 3 is a block schematic diagram of an implementation structure of a synchronization apparatus for a head-mounted display according to the present invention.
Referring to fig. 3, the synchronization apparatus 300 includes an acquisition module 310, a sound source determining module 320, a detection module 330, an adjusting module 340, and a sound production control module 350.
The acquiring module 310 is used for acquiring a virtual reality image and an original sound signal corresponding to the virtual reality image.
The sound source determining module 320 is configured to determine a sound source corresponding to each original sound signal in the virtual reality image.
The detection module 330 is used for detecting the posture of the head-mounted display and the position information of the head-mounted display relative to each sound source.
The adjusting module 340 is configured to adjust parameters of each original sound signal according to the posture and all position information to obtain a current sound signal. Wherein the parameter comprises at least one of intensity, frequency, pitch, and timbre.
The sound production control module 350 is used for controlling the sound generating device to play the current sound signal.
Fig. 4 is a block schematic diagram of another implementation structure of the synchronization apparatus for the head-mounted display according to the present invention.
As shown in fig. 4, the apparatus 300 further includes a moving module 410 and a display control module 420. The moving module 410 is configured to perform moving processing on the virtual reality image according to the posture and the position information to obtain a current virtual reality image; the display control module 420 is configured to control the display device to display a current virtual reality image.
Further, the sound source determining module 320 includes a decomposition unit 321 and a determining unit 322, where the decomposition unit 321 is configured to decompose an original sound signal including at least two channels to obtain original channel signals corresponding to the number of channels; the determining unit 322 is configured to determine the sound sources corresponding to each original channel signal.
On this basis, the adjusting module 340 includes an adjusting unit 341 and a combining unit 342, where the adjusting unit 341 is configured to adjust the parameter of each original channel signal according to the posture and each position information to obtain a corresponding current channel signal; the merging unit 342 is configured to merge all current channel signals to obtain a current sound signal.
The present invention also provides a head-mounted display which, in one aspect, comprises the synchronization apparatus 300 for a head-mounted display as described above.
Fig. 5 is a block schematic diagram of an implementation structure of the head-mounted display according to another aspect of the present invention.
According to fig. 5, the head-mounted display 500 comprises a memory 501 and a processor 502, the memory 501 being used for storing instructions for controlling the processor 502 to operate for performing the synchronization method for the head-mounted display described above.
The processor 502 may be, for example, a central processing unit CPU, a microprocessor MCU, or the like. The memory 501 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like.
In addition, according to fig. 5, the head-mounted display 500 further comprises an interface device 503, an input device 504, a display device 505, a communication device 506, a speaker 507, a microphone 508, etc. Although a plurality of devices are shown in fig. 5, the head-mounted display of the present invention may involve only some of them, for example, the memory 501, the processor 502, the speaker 507, and the like.
The communication device 506 can perform wired or wireless communication, for example.
The interface device 503 includes, for example, a headphone jack, a USB interface, and the like.
The input device 504 may include, for example, a touch screen, a key, and the like.
The display device 505 is, for example, a liquid crystal display panel, a touch panel, or the like.
The embodiments in the present disclosure are described in a progressive manner; the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on its differences from the others. It should be clear to those skilled in the art that the embodiments described above can be used alone or in combination as needed. In addition, for the device embodiments, since they correspond to the method embodiments, the description is relatively brief; for relevant points, refer to the description of the corresponding parts of the method embodiments. The device embodiments described above are merely illustrative, in that modules illustrated as separate components may or may not be physically separate.
The present invention may be an apparatus, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), is personalized with state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (12)

1. A synchronization method for a head-mounted display, comprising:
acquiring a virtual reality image and an original sound signal corresponding to the virtual reality image;
determining a sound source corresponding to each original sound signal in the virtual reality image;
detecting a pose of the head-mounted display and position information of the head-mounted display relative to each of the sound sources, the position information including: the distance, angle, and direction of the head-mounted display relative to each of the sound sources in the virtual reality image;
adjusting the parameters of each original sound signal according to the pose and all the position information to obtain a current sound signal;
and controlling a sound generating device to play the current sound signal.
2. The synchronization method of claim 1, wherein the parameters include at least one of intensity, frequency, pitch, and timbre.
3. The synchronization method according to claim 1, wherein, after the detecting of the pose of the head-mounted display and the position information of the head-mounted display relative to each of the sound sources, the method further comprises:
moving the virtual reality image according to the pose and the position information to obtain a current virtual reality image;
and controlling a display device to display the current virtual reality image.
4. The synchronization method according to claim 1, wherein the determining a sound source corresponding to each of the original sound signals in the virtual reality image comprises:
decomposing an original sound signal comprising at least two sound channels to obtain original sound channel signals corresponding in number to the sound channels;
and determining the sound sources corresponding one-to-one to the original sound channel signals.
5. The synchronization method according to claim 4, wherein the adjusting the parameters of each of the original sound signals according to the pose and all the position information to obtain a current sound signal comprises:
respectively adjusting the parameters of each original sound channel signal according to the pose and each piece of position information to obtain a corresponding current sound channel signal;
and combining all the current sound channel signals to obtain the current sound signal.
6. A synchronization apparatus for a head-mounted display, comprising:
an acquisition module for acquiring a virtual reality image and an original sound signal corresponding to the virtual reality image;
a sound source determining module for determining a sound source corresponding to each original sound signal in the virtual reality image;
a detection module for detecting a pose of the head-mounted display and position information of the head-mounted display relative to each of the sound sources, the position information including: the distance, angle, and direction of the head-mounted display relative to each of the sound sources in the virtual reality image;
an adjusting module for adjusting the parameters of each original sound signal according to the pose and all the position information to obtain a current sound signal;
and a sound production control module for controlling a sound generating device to play the current sound signal.
7. The synchronization device of claim 6, wherein the parameter comprises at least one of intensity, frequency, pitch, and timbre.
8. The synchronization apparatus according to claim 6, characterized in that the apparatus further comprises:
a moving module for moving the virtual reality image according to the pose and the position information to obtain a current virtual reality image;
and a display control module for controlling a display device to display the current virtual reality image.
9. The synchronization apparatus of claim 6, wherein the audio source determination module comprises:
a decomposition unit for decomposing an original sound signal comprising at least two sound channels to obtain original sound channel signals corresponding in number to the sound channels;
and a determining unit for determining the sound sources corresponding one-to-one to the original sound channel signals.
10. The synchronization apparatus of claim 9, wherein the adjustment module comprises:
an adjusting unit for respectively adjusting the parameters of each original sound channel signal according to the pose and each piece of position information to obtain a corresponding current sound channel signal;
and a merging unit for merging all the current sound channel signals to obtain the current sound signal.
11. A head-mounted display comprising a synchronization device according to any one of claims 6 to 10.
12. A head-mounted display comprising a memory and a processor, the memory storing instructions for controlling the processor to perform the synchronization method of any one of claims 1-5.
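As a non-authoritative illustration of the method of claims 1, 4, and 5, the following Python sketch decomposes a multi-channel sound signal into per-channel signals, adjusts each channel's intensity from the listener's pose and the per-source distance and angle, and merges the channels into the current sound signal. All names, the inverse-distance attenuation, and the cosine panning law are assumptions made for the example; the claims do not prescribe a specific adjustment formula.

```python
import math

REFERENCE_DISTANCE = 1.0  # distance at which gain is 1.0 (assumed unit)

def channel_gain(distance, source_angle, head_yaw):
    """Gain for one channel from listener-to-source distance and relative angle.

    Both the inverse-distance model and the cosine pan are illustrative
    choices, not the patented implementation.
    """
    # Inverse-distance attenuation, a common free-field approximation.
    attenuation = REFERENCE_DISTANCE / max(distance, REFERENCE_DISTANCE)
    # Simple cosine panning: sources in front of the listener are louder.
    relative = math.radians(source_angle - head_yaw)
    pan = 0.5 * (1.0 + math.cos(relative))
    return attenuation * pan

def synchronize(channels, sources, head_yaw):
    """channels: list of per-channel sample lists (claim 4's decomposition);
    sources: matching (distance, angle_degrees) pairs; head_yaw in degrees."""
    adjusted = []
    for samples, (distance, angle) in zip(channels, sources):
        g = channel_gain(distance, angle, head_yaw)
        # Adjust the parameters of each original sound channel signal.
        adjusted.append([s * g for s in samples])
    # Merge all current sound channel signals into the current sound signal.
    return [sum(frame) for frame in zip(*adjusted)]
```

In this sketch, a source directly ahead at the reference distance passes through unchanged, while a source behind the listener is silenced; re-running `synchronize` whenever the detected pose changes keeps the played sound consistent with the displayed virtual reality image.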
CN201710218231.3A 2017-04-05 2017-04-05 Synchronization method and device for head-mounted display and head-mounted display Active CN107168518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710218231.3A CN107168518B (en) 2017-04-05 2017-04-05 Synchronization method and device for head-mounted display and head-mounted display


Publications (2)

Publication Number Publication Date
CN107168518A CN107168518A (en) 2017-09-15
CN107168518B true CN107168518B (en) 2020-06-23

Family

ID=59849842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710218231.3A Active CN107168518B (en) 2017-04-05 2017-04-05 Synchronization method and device for head-mounted display and head-mounted display

Country Status (1)

Country Link
CN (1) CN107168518B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019113939A1 (en) * 2017-12-15 2019-06-20 歌尔科技有限公司 Vr device and experience control method, system, apparatus, and storage medium thereof
CN109086029B (en) * 2018-08-01 2021-10-26 北京奇艺世纪科技有限公司 Audio playing method and VR equipment
CN111050271B (en) * 2018-10-12 2021-01-29 北京微播视界科技有限公司 Method and apparatus for processing audio signal
CN109582273A (en) 2018-11-26 2019-04-05 联想(北京)有限公司 Audio-frequency inputting method, electronic equipment and audio output device
CN109814710B (en) * 2018-12-27 2022-05-13 青岛小鸟看看科技有限公司 Data processing method and device and virtual reality equipment
US10846898B2 (en) * 2019-03-28 2020-11-24 Nanning Fugui Precision Industrial Co., Ltd. Method and device for setting a multi-user virtual reality chat environment
CN111427447B (en) * 2020-03-04 2023-08-29 青岛小鸟看看科技有限公司 Virtual keyboard display method, head-mounted display device and system
CN113467603B (en) * 2020-03-31 2024-03-08 抖音视界有限公司 Audio processing method and device, readable medium and electronic equipment
CN114025287B (en) * 2021-10-29 2023-02-17 歌尔科技有限公司 Audio output control method, system and related components
CN114630145A (en) * 2022-03-17 2022-06-14 腾讯音乐娱乐科技(深圳)有限公司 Multimedia data synthesis method, equipment and storage medium
CN114815256B (en) * 2022-04-15 2023-10-03 青岛虚拟现实研究院有限公司 Screen parameter adjustment method, device and storage medium of virtual reality head-mounted device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8183997B1 (en) * 2011-11-14 2012-05-22 Google Inc. Displaying sound indications on a wearable computing system
CN106023983B (en) * 2016-04-27 2019-11-05 Oppo广东移动通信有限公司 Multi-user voice exchange method and device based on Virtual Reality scene
CN106020440A (en) * 2016-05-05 2016-10-12 西安电子科技大学 Emotion interaction based Peking Opera teaching system
CN105916096B (en) * 2016-05-31 2018-01-09 努比亚技术有限公司 A kind of processing method of sound waveform, device, mobile terminal and VR helmets


Similar Documents

Publication Publication Date Title
CN107168518B (en) Synchronization method and device for head-mounted display and head-mounted display
US10123140B2 (en) Dynamic calibration of an audio system
JP6961007B2 (en) Recording virtual and real objects in mixed reality devices
US10126823B2 (en) In-vehicle gesture interactive spatial audio system
EP3343349B1 (en) An apparatus and associated methods in the field of virtual reality
CN111916039B (en) Music file processing method, device, terminal and storage medium
CN106790940B (en) Recording method, recording playing method, device and terminal
US9986362B2 (en) Information processing method and electronic device
JP6764490B2 (en) Mediated reality
US10757528B1 (en) Methods and systems for simulating spatially-varying acoustics of an extended reality world
CN112165648B (en) Audio playing method, related device, equipment and storage medium
WO2021169689A1 (en) Sound effect optimization method and apparatus, electronic device, and storage medium
US20220180889A1 (en) Audio bandwidth reduction
CN114026885A (en) Audio capture and rendering for augmented reality experience
CN114270877A (en) Non-coincident audiovisual capture system
CN106598245B (en) Multi-user interaction control method and device based on virtual reality
CN114339582B (en) Dual-channel audio processing method, device and medium for generating direction sensing filter
WO2022054900A1 (en) Information processing device, information processing terminal, information processing method, and program
JP2015050493A (en) Information processing unit, av receiver, and program
Thery et al. Impact of the visual rendering system on subjective auralization assessment in VR
CN114449341B (en) Audio processing method and device, readable medium and electronic equipment
CN114630240B (en) Direction filter generation method, audio processing method, device and storage medium
US11595730B2 (en) Signaling loudness adjustment for an audio scene
KR102111990B1 (en) Method, Apparatus and System for Controlling Contents using Wearable Apparatus
CN115529534A (en) Sound signal processing method and device, intelligent head-mounted equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant