CN105916096B - Sound waveform processing method and apparatus, mobile terminal and VR headset - Google Patents
Sound waveform processing method and apparatus, mobile terminal and VR headset
- Publication number
- CN105916096B CN105916096B CN201610379135.2A CN201610379135A CN105916096B CN 105916096 B CN105916096 B CN 105916096B CN 201610379135 A CN201610379135 A CN 201610379135A CN 105916096 B CN105916096 B CN 105916096B
- Authority
- CN
- China
- Prior art keywords
- sound
- acoustic signals
- source
- angle
- microphone
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
- H04S1/005—For headphones
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
Abstract
A sound waveform processing method, including: collecting an acoustic signal and obtaining sound-source angle data of the acoustic signal; determining, according to the optical parameters of the optical lens of a virtual reality headset, the actual distance from the source of the acoustic signal to a specified object, and determining the ratio between the actual distance and the default distance from the source to the specified object; and correcting the acoustic signal in real time according to the sound-source angle data and the ratio. By adjusting the distance and relative angle of audio playback for a sounding object in real time according to the optical parameters of the VR headset, this scheme places the sounding object the user sees and the one the user hears at the same position, so that no audio-image deviation is produced and a better user experience is achieved.
Description
Technical field
The present application relates to, but is not limited to, the field of communication technology, and in particular to a sound waveform processing method and apparatus, a mobile terminal, and a VR headset.
Background art
In current technology, in the virtual scene provided by a VR (Virtual Reality) headset, the sound emitted by a virtual object is not associated with the optical parameters of the VR headset, and its apparent direction and loudness are not adjusted in real time according to the image distance at which the user's eyes perceive the sounding object. As a result, when audio is played, the distance of the sounding object conveyed to the ear deviates to some extent from the image distance of the sounding object seen by the eye, and this deviation cannot be corrected in real time according to the perceived image distance. For users in a VR experience, this factor greatly degrades the sense of immersion. Current technology therefore cannot satisfy users' growing demand for immersion, and the audio-image deviation easily causes fatigue and vertigo, leading to a poor user experience.
Summary of the invention
Embodiments of the present invention provide a sound waveform processing method and apparatus, a mobile terminal, and a VR headset, which adjust the distance and relative angle of audio playback for a sounding object in real time according to the optical parameters of the VR headset, so that the sounding object the user sees and the one the user hears are at the same position and no audio-image deviation is produced.
An embodiment of the present invention provides a sound waveform processing method, applicable to a virtual reality headset, including:
collecting an acoustic signal and obtaining sound-source angle data of the acoustic signal;
determining, according to the optical parameters of the optical lens of the virtual reality headset, the actual distance from the source of the acoustic signal to a specified object, and determining the ratio between the actual distance and the default distance from the source to the specified object;
correcting the acoustic signal in real time according to the sound-source angle data and the ratio.
Optionally, collecting the acoustic signal and obtaining the sound-source angle data of the acoustic signal includes:
starting the recording function of a mobile terminal and collecting the acoustic signal in real time;
collecting the frequency response, inter-microphone time difference, and inter-microphone intensity difference of the acoustic signal through a microphone array formed by two microphones arranged on the specified object at a relative angle of 180°;
looking up a preset sound-source angle database and obtaining the sound-source angle data of the acoustic signal from the frequency response, the inter-microphone time difference, and the inter-microphone intensity difference.
Optionally, the sound-source angle database is generated as follows:
in an anechoic laboratory, with the midpoint between the two microphones at a relative angle of 180° as the center, a swept-frequency signal covering [20, 20000] Hz is played at each three-dimensional coordinate point on concentric circles;
the recording function of the mobile terminal is started, and the acoustic signal at each three-dimensional coordinate point is collected through the microphone array;
from the acoustic signals at the three-dimensional coordinate points, the correspondence between the sound-source angle and the frequency response, inter-microphone time difference, and inter-microphone intensity difference is established, forming the sound-source angle database.
Optionally, correcting the acoustic signal in real time according to the sound-source angle data includes:
sending the acoustic signal and the sound-source angle data to a listening-angle rotation module;
calling a preset head-related transfer function to obtain a first compensation value of the acoustic signal in phase, frequency, and intensity;
correcting the acoustic signal in real time according to the first compensation value.
Correcting the acoustic signal in real time according to the ratio includes:
determining a second compensation value of the acoustic signal in intensity according to the ratio;
correcting the acoustic signal in real time according to the second compensation value.
An embodiment of the present invention also provides a sound waveform processing apparatus, including:
an acquisition module, configured to collect an acoustic signal and obtain the sound-source angle data of the acoustic signal;
a determining module, configured to determine, according to the optical parameters of the optical lens of a virtual reality headset, the actual distance from the source of the acoustic signal to a specified object, and to determine the ratio between the actual distance and the default distance from the source to the specified object;
a correcting module, configured to correct the acoustic signal in real time according to the sound-source angle data and the ratio.
Optionally, the acquisition module includes:
a recording start unit, configured to start the recording function of a mobile terminal and collect the acoustic signal in real time;
a parameter collection unit, configured to collect the frequency response, inter-microphone time difference, and inter-microphone intensity difference of the acoustic signal through a microphone array formed by two microphones arranged on the specified object at a relative angle of 180°;
a lookup unit, configured to look up a preset sound-source angle database and obtain the sound-source angle data of the acoustic signal from the frequency response, the inter-microphone time difference, and the inter-microphone intensity difference.
Optionally, the sound-source angle database is generated as follows:
in an anechoic laboratory, with the midpoint between the two microphones at a relative angle of 180° as the center, a swept-frequency signal covering [20, 20000] Hz is played at each three-dimensional coordinate point on concentric circles;
the recording function of the mobile terminal is started, and the acoustic signal at each three-dimensional coordinate point is collected through the microphone array;
from the acoustic signals at the three-dimensional coordinate points, the correspondence between the sound-source angle and the frequency response, inter-microphone time difference, and inter-microphone intensity difference is established, forming the sound-source angle database.
Optionally, the correcting module includes:
a sending unit, configured to send the acoustic signal and the sound-source angle data to a listening-angle rotation module;
a calling unit, configured to call a preset head-related transfer function and obtain a first compensation value of the acoustic signal in phase, frequency, and intensity;
a first compensation unit, configured to correct the acoustic signal in real time according to the first compensation value;
a determining unit, configured to determine a second compensation value of the acoustic signal in intensity according to the ratio;
a second compensation unit, configured to correct the acoustic signal in real time according to the second compensation value.
An embodiment of the present invention also provides a mobile terminal, including a recording module and the sound waveform processing apparatus described above.
An embodiment of the present invention also provides a virtual reality headset, including an optical lens and the mobile terminal described above.
In summary, the embodiments of the present invention provide a sound waveform processing method and apparatus, a mobile terminal, and a VR headset, which adjust the distance and relative angle of audio playback for a sounding object in real time according to the optical parameters of the VR headset, so that the sounding object the user sees and the one the user hears are at the same position and no audio-image deviation is produced, achieving a better user experience and a stronger sense of immersion.
Brief description of the drawings
The accompanying drawings provide a further understanding of the technical solution of the present invention and form a part of the specification. Together with the embodiments of the application they serve to explain the technical solution of the present invention and do not limit it.
Fig. 1 is a hardware structure diagram of a mobile terminal implementing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a schematic diagram of the virtual scene shown in a virtual reality headset;
Fig. 4 is an imaging schematic diagram of a virtual reality headset;
Fig. 5 is a flow chart of a sound waveform processing method according to embodiment one of the present invention;
Fig. 6 is a flow chart of step S10 in Fig. 5;
Fig. 7 is a flow chart of the method for generating the sound-source angle database in embodiment one of the present invention;
Fig. 8 is a sound collection schematic diagram of embodiment one of the present invention;
Fig. 9 is a flow chart of correcting the acoustic signal in real time according to the sound-source angle data in embodiment one of the present invention;
Fig. 10 is a flow chart of correcting the acoustic signal in real time according to the ratio in embodiment one of the present invention;
Fig. 11 is a schematic diagram of a sound waveform processing apparatus according to embodiment two of the present invention;
Fig. 12 is a schematic diagram of a sound waveform processing apparatus according to embodiment three of the present invention;
Fig. 13 is a schematic diagram of a sound waveform processing apparatus according to embodiment four of the present invention;
Fig. 14 is a schematic diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 15 is a schematic diagram of a virtual reality headset according to an embodiment of the present invention.
The realization, functional characteristics, and advantages of the present invention will be further described below with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit it.
The mobile terminal implementing each embodiment of the present invention is now described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part", or "unit" used to denote elements are adopted only to facilitate the explanation of the present invention and have no specific meaning of their own; "module" and "part" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal; however, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a hardware structure diagram of a mobile terminal implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it is received by the mobile communication module 112. The broadcast signal may exist in various forms, for example as a digital multimedia broadcasting (DMB) electronic program guide (EPG) or a digital video broadcast-handheld (DVB-H) electronic service guide (ESG). The broadcast receiving module 111 can receive signal broadcasts using various types of broadcast systems. In particular, it can receive digital broadcasts using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcast-handheld (DVB-H), the MediaFLO forward link media data broadcasting system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit these digital broadcasting systems as well as other broadcast systems that provide broadcast signals. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal and may be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 114 supports short-range communication. Examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or obtaining the location information of the mobile terminal; a typical example is a GPS (global positioning system) module. With current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information, and applies triangulation to the calculated information, so as to accurately calculate the three-dimensional current location in terms of longitude, latitude, and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information using a further satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location in real time.
The A/V input unit 120 is used to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes the image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151, stored in the memory 160 (or another storage medium), or sent via the wireless communication unit 110; two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode, and a speech recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112. The microphone 122 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and sending audio signals.
The user input unit 130 can generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, and the like caused by being touched), a jog wheel, a joystick, and so on. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., its open or closed state), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration and direction of movement of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide phone, the sensing unit 140 can sense whether the slide phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 provides power and whether the interface unit 170 is coupled to an external device. The sensing unit 140 may include a proximity sensor 141; this is described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for verifying the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and so on. In addition, a device having an identification module (hereinafter referred to as an "identification device") may take the form of a smart card; the identification device can therefore be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 can be used to receive input (e.g., data, information, power, etc.) from an external device, transfer the received input to one or more elements within the mobile terminal 100, or transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or power input from the cradle may serve as signals for identifying whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals in a visual, audio, and/or tactile manner (e.g., audio signals, video signals, alarm signals, vibration signals, etc.). The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 can display captured and/or received images, a UI or GUI showing the video or image and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be constructed to be transparent so that the user can view through them from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 can, when the mobile terminal is in a mode such as a call signal reception mode, a call mode, a recording mode, a speech recognition mode, or a broadcast reception mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and so on.
The alarm unit 153 can provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 can provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations performed by the controller 180, or temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 can store data about the vibrations and audio signals of various modes output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or separately from it. The controller 180 can perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various embodiments described here can be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described here can be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described here; in some cases, such an embodiment can be implemented in the controller 180. For a software implementation, an embodiment such as a process or function can be implemented with a separate software module that performs at least one function or operation. The software code can be implemented by a software application (or program) written in any suitable programming language, and can be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. Below, for brevity, a slide-type mobile terminal among various types such as folder-type, bar-type, swing-type, and slide-type mobile terminals is taken as an example. The present invention can nevertheless be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 shown in Fig. 1 may be constructed to operate with wired and wireless communication systems that transmit data via frames or packets, as well as with satellite-based communication systems.
The communication system in which the mobile terminal according to the present invention can operate is now described with reference to Fig. 2.
Such communication systems can use different air interfaces and/or physical layers. For example, the air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, Long Term Evolution (LTE)), the global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to form an interface with a public switched telephone network (PSTN) 290, and also to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul links. The backhaul links can be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that the system may include a plurality of BSCs 275, as shown in Fig. 2.
Each BS 270 can serve one or more sectors (or regions), each sector being covered by an omnidirectional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector can be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support multiple frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment can be referred to as a CDMA channel. The BS 270 can also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In this case, the term "base station" can be used to broadly denote a single BSC 275 and at least one BS 270. A base station can also be referred to as a "cell site"; alternatively, each sector of a particular BS 270 can be referred to as a cell site.
As shown in Fig. 2, a broadcast transmitter (BT) 295 sends broadcast signals to the mobile terminals 100 operating in the system. The broadcast receiving module 111 shown in Fig. 1 is provided in the mobile terminal 100 to receive the broadcast signals sent by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300, which help locate at least one of the plurality of mobile terminals 100.
A plurality of satellites 300 are depicted in Fig. 2, but it should be understood that useful positioning information can be obtained with any number of satellites. The GPS module 115 shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of GPS tracking techniques, or in addition to them, other techniques that can track the position of the mobile terminal can be used. In addition, at least one GPS satellite 300 can selectively or additionally handle satellite DMB transmission.
In a typical operation of the wireless communication system, the BS 270 receives reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically participate in calls, messaging, and other types of communication. Each reverse-link signal received by a particular base station 270 is processed in that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including coordinating soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides the additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to send forward-link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the embodiments of the method of the present invention are now proposed.
In the virtual scene provided by a VR headset, the user observes the screen of the mobile terminal through the optical lens and sees a virtual world. In this virtual world, only a small portion is visible to the current user, referred to as the user's visual range (matched zone). Meanwhile, a head-tracking device tracks the direction the user currently faces in the virtual world, and a model of the user's head also exists in the virtual world, turning or moving as the user's real head does; it is referred to as the user's head matching model (user's matched head), as shown in Fig. 3.
The VR headset should be equipped with at least the following: an optical lens, a slot or clamp for the mobile terminal, and audio components (a wired earphone, a Bluetooth earphone, etc.).
The mobile terminal should be equipped with at least the following: a display screen, processors (a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), etc.), an audio output interface (an earphone jack, a Bluetooth chip, etc.), and a head-orientation tracking device (a gyroscope, etc.).
For a sounding object in the virtual world, current techniques determine the direction and power of the audio output according to the position of this object in the virtual world relative to the user's head matching model.
Suppose there are two sounding objects in the virtual world, both of which have entered the user's visual range. The vector of object A relative to the user's head matching model O is OA, the vector of object B relative to O is OB, and the position vector between objects A and B is then AB = OB - OA. The images of the two objects are projected onto the display screen of the mobile terminal and reach the human eye through the optical lens of the VR headset; the sounds they emit are output from the audio output interface of the mobile terminal to the audio components and thus reach the human ear.
Because of the optical lens of the VR headset, the positions of objects A and B observed by the eye deviate from their ideal positions in the virtual world, while the audio output of the sounding objects is still rendered according to the ideal positions in the virtual world. This produces the problem of audio-image deviation, which easily causes fatigue and vertigo in the user.
As shown in Fig. 4, the position vector at which objects A and B are imaged at the human eye becomes A'B', which is closely related to optical parameters of the optical lens such as its focal length and magnification factor. Let the distance from the optical lens to the screen of the mobile terminal be a and the focal length of the optical lens be f; the image distance b can then be calculated from the lens formula. The relative position is calculated using a projection factor μ, which characterizes the scaling with which the virtual world is projected onto the display screen, and a magnification factor α of the optical lens, which characterizes the ratio of the virtual image to the real object at the current object distance, as shown in Fig. 4.
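The two formulas themselves are reproduced only as images in the original publication and do not survive in this text. A plausible reconstruction from the surrounding definitions, assuming the standard Gaussian thin-lens relation for a virtual image (screen inside the focal length, a < f) and assuming that μ and α act as pure scale factors, is:

\frac{1}{f} = \frac{1}{a} - \frac{1}{b}, \qquad \text{hence} \qquad b = \frac{a f}{f - a},

\vec{A'B'} = \mu \, \alpha \, \vec{AB}.

Under these assumptions the magnification factor at the current object distance would be α = b/a; the exact form used in the patent cannot be confirmed from this text.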
This solution therefore requires adding, on the mobile terminal, an adaptation operation for the optical lens parameters of the VR headset, so that the audio being played is adjusted in real time. For example, the position vector A'B' of the virtual images of objects A and B at the human eye, calculated above, differs in angle and distance from the ideal position vector AB in the virtual world.
This is described in detail below through specific embodiments.
Embodiment one
Fig. 5 is a flow chart of a sound waveform processing method according to an embodiment of the present invention. The method of this embodiment is applicable to a VR headset and is described with reference to Fig. 5; as shown in Fig. 5, the method of this embodiment includes:
S10, collecting an acoustic signal and obtaining sound-source angle data of the acoustic signal;
S20, determining, according to the optical parameters of the optical lens of the virtual reality headset, the actual distance from the source of the acoustic signal to a specified object, and determining the ratio between the actual distance and the default distance from the source to the specified object;
S30, correcting the acoustic signal in real time according to the sound-source angle data and the ratio.
The method of this embodiment adjusts the distance and relative angle of audio playback for a sounding object in real time according to the optical parameters of the VR headset, so that the sounding object the user sees and the one the user hears are at the same position; no audio-image deviation is produced, achieving a better user experience and a stronger sense of immersion.
In this embodiment, as shown in Fig. 6, step S10 includes:
S11, starting the recording function of a mobile terminal and collecting the acoustic signal in real time;
S12, collecting the frequency response, inter-microphone time difference, and inter-microphone intensity difference of the acoustic signal through a microphone array formed by two microphones arranged on the specified object at a relative angle of 180°;
S13, looking up a preset sound-source angle database and obtaining the sound-source angle data of the acoustic signal from the frequency response, the inter-microphone time difference, and the inter-microphone intensity difference.
The specified object is a standard human head model.
In this embodiment, the current sound-source angle can be accurately located from the frequency response of the acoustic signal, the inter-microphone time difference (ITD), and the inter-microphone intensity difference (IID).
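As an illustration of how an inter-microphone time difference translates into a direction estimate, the following is a minimal Python sketch; it is not taken from the patent, and the cross-correlation method, the far-field two-microphone geometry, and the 18 cm spacing are assumptions made purely for illustration.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def estimate_itd(left, right, fs):
    """Time of arrival at the right mic minus the left mic, via cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # peak lag in samples
    return -lag / fs  # positive => the sound reached the left mic first

def itd_to_azimuth(itd, mic_spacing):
    """Far-field approximation: itd = (spacing / c) * sin(azimuth)."""
    s = np.clip(itd * SPEED_OF_SOUND / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Example: a 1 kHz tone reaching the right microphone 12 samples late.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
delay = 12
left, right = tone, np.concatenate([np.zeros(delay), tone[:-delay]])
itd = estimate_itd(left, right, fs)
print(f"ITD = {itd * 1e6:.0f} us, azimuth = {itd_to_azimuth(itd, 0.18):.1f} deg")

The ITD alone leaves front-back and elevation ambiguity, which is why the scheme described here combines it with the IID and the per-frequency response.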
As shown in Fig. 7, in this embodiment the sound-source angle database of step S13 is generated as follows:
Step 131, in an anechoic laboratory, with the midpoint between the two microphones at a relative angle of 180° as the center, a swept-frequency signal covering [20, 20000] Hz is played at each three-dimensional coordinate point on concentric circles;
Step 132, the recording function of the mobile terminal is started, and the acoustic signal at each three-dimensional coordinate point is collected through the microphone array;
Step 133, from the acoustic signals at the three-dimensional coordinate points, the correspondence between the sound-source angle and the frequency response, inter-microphone time difference, and inter-microphone intensity difference is established, forming the sound-source angle database.
In this embodiment, the sound-source angle database is established in advance, before the mobile terminal leaves the factory, and is necessary for a mobile terminal with the "visible sound field" function.
In an anechoic laboratory, two microphones at a relative angle of 180° form an array. With the midpoint of this array as the center, a swept-frequency signal covering [20, 20000] Hz is played at each three-dimensional coordinate point on concentric circles, and the sound waveform at each point is collected through the microphone array.
The sound collection scheme is shown in Fig. 8. The sound-source angle data is expressed by the horizontal angle (azimuth), the elevation angle (elevation), and the front-back direction. With the other two axes held constant, changing one angle alone changes the frequency response of sound waves of different frequencies; together with the ITD and IID, these three conditions can accurately locate the current sound-source angle. Depending on the required discrimination, different spacings can be used between the collected coordinate points; in general, an accuracy of 1° to 5° satisfies the discrimination requirements of the human ear for the vast majority of users. Once all points are collected, the sound-source angle database is formed.
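A rough sketch of what such a database build could look like is given below. It is not the patent's implementation: the feature set, the 5° grid, the Welch spectral estimate, and the capture stub record_from_array are all assumptions for illustration.

import numpy as np
from scipy.signal import chirp, welch

FS = 48000

def sweep_signal(duration=2.0, f0=20.0, f1=20000.0):
    """The [20, 20000] Hz swept-frequency test signal played at each grid point."""
    t = np.arange(int(FS * duration)) / FS
    return chirp(t, f0=f0, t1=duration, f1=f1, method="logarithmic")

def extract_features(left, right):
    """Per-point fingerprint: frequency response plus ITD and IID between the mics."""
    _, psd_left = welch(left, FS, nperseg=2048)
    _, psd_right = welch(right, FS, nperseg=2048)
    corr = np.correlate(left, right, mode="full")
    itd = -(np.argmax(corr) - (len(right) - 1)) / FS
    iid = 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))
    return {"freq_response": np.concatenate([psd_left, psd_right]),
            "itd": itd, "iid": iid}

def record_from_array():
    """Hypothetical capture stub; a real build records the two-mic array response."""
    return np.random.randn(FS), np.random.randn(FS)

# Play the sweep at each (azimuth, elevation) point on the concentric circles,
# record it with the array, and file the features under that angle.
database = {}
for azimuth in range(0, 360, 5):         # a 5-degree grid, within the 1-5 degree
    for elevation in range(-90, 91, 5):  # accuracy the text says suffices
        left, right = record_from_array()
        database[(azimuth, elevation)] = extract_features(left, right)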
The sound-source database is saved in the memory of the mobile terminal and serves as the reference against which music files are judged. When judging, the ITD, IID, and frequency response of the left and right channels are compared against the ITD, IID, and frequency response for each angle saved in the sound-source angle database, and the closest entry is found; the angle corresponding to that entry is the current music playback angle, for example 0° in front with an elevation of 0°.
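Continuing the hypothetical sketch above, the lookup described here amounts to a nearest-neighbour search over the stored fingerprints; the distance weighting below is an illustrative assumption, not the patent's formula.

import numpy as np

def match_source_angle(observed, database):
    """Return the angle whose stored ITD, IID and frequency response are
    closest to the observed features of the left/right channels."""
    def mismatch(entry):
        return (1e4 * abs(entry["itd"] - observed["itd"])  # seconds, rescaled
                + abs(entry["iid"] - observed["iid"])
                + np.linalg.norm(entry["freq_response"] - observed["freq_response"]))
    return min(database, key=lambda angle: mismatch(database[angle]))

# e.g. (0, 0) would indicate playback from straight ahead at zero elevation
azimuth, elevation = match_source_angle(extract_features(left, right), database)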
In this embodiment, as shown in Fig. 9, correcting the acoustic signal in real time according to the sound-source angle data in step S30 includes:
S310, sending the acoustic signal and the sound-source angle data to a listening-angle rotation module;
S311, calling a preset HRTF (Head-Related Transfer Function) to obtain a first compensation value of the acoustic signal in phase and frequency;
S312, correcting the acoustic signal in real time according to the first compensation value.
In this embodiment, the HRTF is preset in the listening-angle rotation module of the mobile terminal. After receiving the transmitted angle data and the original acoustic signal, the listening-angle rotation module applies the preset HRTF model to obtain the amounts by which the original acoustic signal should be compensated in phase and frequency, and then corrects the original acoustic signal in real time.
The sound emitted by the sounding object after adaptation to the optical parameters is adjusted and compensated according to the above HRTF model. For example, if the listening angle is set to 30° to the left of front with an elevation of 45°, then after receiving this setting the listening-angle rotation module queries a general HRTF model and uses it as the guiding standard for the sound-wave compensation.
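A minimal sketch of the kind of correction a listening-angle rotation module performs, assuming the HRTF is given as a table of head-related impulse responses (HRIRs) indexed by angle; the table, its toy contents, and the nearest-angle lookup are illustrative assumptions, not the patent's model:

import numpy as np
from scipy.signal import fftconvolve

def rotate_listening_angle(mono, hrir_table, azimuth, elevation):
    """Convolve with the HRIR pair nearest the requested direction; this applies
    the phase, frequency and intensity compensation for that listening angle."""
    key = min(hrir_table, key=lambda k: (k[0] - azimuth) ** 2 + (k[1] - elevation) ** 2)
    hrir_left, hrir_right = hrir_table[key]
    return (fftconvolve(mono, hrir_left)[: len(mono)],
            fftconvolve(mono, hrir_right)[: len(mono)])

# Toy table: each direction maps to a pure delay/attenuation pair. A real table
# would come from anechoic measurements on a standard head model, as described.
hrir_table = {
    (0, 0): (np.r_[1.0], np.r_[1.0]),
    (-30, 45): (np.r_[1.0, np.zeros(15)], np.r_[np.zeros(15), 0.7]),
}
left, right = rotate_listening_angle(np.random.randn(48000), hrir_table,
                                     azimuth=-30, elevation=45)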
This HRTF model is characterized by taking the averages of general human head size, diffraction, reflection, and transfer parameters, or the average or typical values for a target group (such as Asians). In an anechoic laboratory, an industry-recognized standard head model is used for the simulation, and the adaptation of the mobile terminal to the optical lens parameters of the VR headset is carried out repeatedly, as an experiment in adjusting the played audio in real time. The instrument receiving the sound waves is simply replaced by the simulated head model, and the waveform of the swept-frequency signal is likewise collected at each coordinate point in three-dimensional space.
Of course, several large laboratories in the acoustics community have already measured this data, and data published by these laboratories can also be used directly as the HRTF model preset in the mobile terminal. The listening-angle rotation module thus obtains the amounts by which the original sound wave should be compensated in phase, frequency, and intensity, and then corrects the original sound wave in real time.
In this embodiment, as shown in Fig. 10, correcting the acoustic signal in real time according to the ratio in step S30 includes:
S320, determining a second compensation value of the acoustic signal in intensity according to the ratio;
S321, correcting the acoustic signal in real time according to the second compensation value.
From the law of sound propagation in air, the magnitude of acoustic energy is inversely proportional to the square of the distance; the size of the intensity compensation can thus be calculated, and the original sound wave corrected in real time.
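A sketch of that intensity computation under the stated inverse-square assumption (the function and variable names are illustrative):

import math

def intensity_compensation_db(actual_distance, default_distance):
    """Acoustic energy falls off with the square of distance, so rendering the
    source at actual_distance instead of default_distance calls for a gain of
    (default/actual)^2 in energy, i.e. 20*log10(default/actual) decibels."""
    return 20.0 * math.log10(default_distance / actual_distance)

# Example: the lens makes the virtual image appear at 2 m instead of the
# default 3 m, so the sound should be played about 3.5 dB louder.
print(intensity_compensation_db(actual_distance=2.0, default_distance=3.0))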
Finally, the processed sound waveform is output to the user through the output interface, bringing a playback effect with accurate audio-image adaptation. Adjusting the distance and relative angle of audio playback for a sounding object in real time according to the optical parameters of the VR headset places the sounding object the user sees and the one the user hears at the same position, so that no audio-image deviation is produced, achieving a better user experience and a stronger sense of immersion.
Embodiment two
Fig. 11 is a schematic diagram of a sound waveform processing apparatus according to an embodiment of the present invention. As shown in Fig. 11, a sound waveform processing apparatus 0001 of this embodiment includes:
an acquisition module 1000, configured to collect an acoustic signal and obtain the sound-source angle data of the acoustic signal;
a determining module 2000, configured to determine, according to the optical parameters of the optical lens of the virtual reality headset, the actual distance and angle from the source of the acoustic signal to a specified object, and to determine the ratio and angle between the actual distance and the default distance from the source to the specified object;
a correcting module 3000, configured to correct the acoustic signal in real time according to the sound-source angle data and the ratio.
The apparatus of this embodiment adjusts the distance and relative angle of audio playback for a sounding object in real time according to the optical parameters of the VR headset, so that the sounding object the user sees and the one the user hears are at the same position; no audio-image deviation is produced, achieving a better user experience and a stronger sense of immersion.
Embodiment three
Fig. 12 is a schematic diagram of a sound waveform processing apparatus according to an embodiment of the present invention. As shown in Fig. 12, the acquisition module 1000 includes:
a recording start unit 1010, configured to start the recording function of a mobile terminal and collect the acoustic signal in real time;
a parameter collection unit 1020, configured to collect the frequency response, inter-microphone time difference, and inter-microphone intensity difference of the acoustic signal through a microphone array formed by two microphones arranged on the specified object at a relative angle of 180°;
a lookup unit 1030, configured to look up a preset sound-source angle database and obtain the sound-source angle data of the acoustic signal from the frequency response, the inter-microphone time difference, and the inter-microphone intensity difference.
In this embodiment, the current sound-source angle can be accurately located from the frequency response, ITD, and IID of the acoustic signal.
In the present embodiment, the generation method in the source of sound angle-data storehouse is:
Under noise elimination laboratory environment, using the midpoint of microphone that described two relative angles are 180 ° as the center of circle, same
On the upper each three-dimensional coordinate point of heart circle, [20,20000] Hz swept-frequency signal is played, by microphone array collection from each
Sound waveform on three-dimensional coordinate point;
Start the sound-recording function of mobile terminal, the sound on each three-dimensional coordinate point is gathered by the microphone array
Ripple signal;
The time between source of sound angle and frequency response, microphone is established by the acoustic signals on each three-dimensional coordinate point
Corresponding relation between difference and microphone between intensity difference, form source of sound angle-data storehouse.
In the present embodiment, the source of sound angle-data storehouse pre-establishes before mobile terminal dispatches from the factory, be have " can
Depending on sound field " necessary to the mobile terminal of function.
It gathers the scheme of sound wave as shown in figure 8, using level angle (Azimuth), the elevation angle (elevation) and front and rear
To represent source of sound angle-data;In the case where other two axles are constant, change an angle, the sound of the sound wave of different frequency merely
Should be different, along with ITD, IID, three conditions can be accurately positioned out current source of sound angle.According to different discriminations
Effect, different spacing accuracies can be used between the coordinate points of collection, it is however generally that, it is that precision can meet big absolutely with 1 ° to 5 °
The human ear of most users, which distinguishes, to be required.All after collection, source of sound angle-data storehouse is formed.
The sound-source database is stored in a memory of the mobile terminal and serves as the reference for judging music files. When judging, the ITD, IID, and frequency response of the left and right channels are compared with the ITD, IID, and frequency response stored for each angle in the sound-source angle database, and the closest entry is selected; the current music playback angle is the angle corresponding to that entry, for example a playback angle of 0° straight ahead with an elevation of 0°.
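Continuing the illustrative sketch above, the factory database and the closest-match search might fit together as follows; the dictionary layout and distance weights are assumptions for illustration, and `extract_features` is the helper sketched earlier.

```python
import numpy as np  # extract_features from the sketch above is assumed in scope

def build_angle_database(sweep_recordings, fs):
    """sweep_recordings: {(azimuth_deg, elevation_deg): (left, right)} captures
    of the [20, 20000] Hz sweep, one per coordinate point on the circles."""
    return {angle: extract_features(l, r, fs)
            for angle, (l, r) in sweep_recordings.items()}

def locate_source(left, right, fs, database, weights=(1.0, 1e8, 1.0)):
    """Return the stored angle whose features are closest to the live capture.

    The weights are illustrative: the ITD term is in seconds (order 1e-4),
    so it is scaled up to be comparable with the dB-valued terms.
    """
    fr, itd, iid = extract_features(left, right, fs)

    def distance(stored):
        s_fr, s_itd, s_iid = stored
        return (weights[0] * np.sum((fr - s_fr) ** 2)
                + weights[1] * (itd - s_itd) ** 2
                + weights[2] * (iid - s_iid) ** 2)

    return min(database, key=lambda angle: distance(database[angle]))
```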
Embodiment Four
Figure 13 is a schematic diagram of a sound waveform processing device according to an embodiment of the present invention. As shown in Figure 13, the correction module 3000 includes:
a transmitting unit 3010, configured to send the acoustic signals and the sound-source angle data to a listening angle rotation module;
a calling unit 3020, configured to call a preset head-related transfer function (HRTF) and obtain a first compensation value of the acoustic signals in phase, frequency, and intensity;
a first compensation unit 3030, configured to correct the acoustic signals in real time according to the first compensation value.
In the present embodiment, the HRTF is preset in the listening angle rotation module of the mobile terminal. The listening angle rotation module applies the preset HRTF model to the received angle data and original acoustic signal to obtain the amounts by which the original acoustic signal should be compensated in phase and frequency, and then corrects the original acoustic signal in real time.
The sound wave emitted by the sounding body after adaptation to the optical parameters is adjusted and compensated according to the above HRTF model. For example, if the listening angle is set to 30° to the left of the front with an elevation of 45°, then upon receiving this setting the listening angle rotation module queries the general HRTF model and uses it as the guiding standard for compensating the sound wave.
This HRTF model is characterized by taking the averages of general human head size, diffraction, reflection, and transfer parameters, or the average or typical values for a target group (such as Asians). In an anechoic chamber, an industry-recognized standard head model is used for the simulation, and the experiment of adapting the optical lens parameters of the VR helmet on the mobile terminal and adjusting the played audio in real time is repeated; the instrument receiving the sound waves is simply replaced by the simulated head model, and the waveform of the swept-frequency signal is likewise collected at each coordinate point of the three-dimensional space.
Of course, several large laboratories in the acoustics community have already measured this kind of data, and the data published by these laboratories can also be used directly as the HRTF model preset in the mobile terminal. The listening angle rotation module obtains the amounts by which the original sound wave should be compensated in phase, frequency, and intensity, and then corrects the original sound wave in real time.
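As a loose illustration of this compensation step, the sketch below assumes the preset HRTF model has been reduced to per-angle, per-frequency-bin gain and phase offsets; that table layout and the block-wise spectral correction are assumptions of the sketch, not the embodiment's actual model format.

```python
import numpy as np

def apply_hrtf_compensation(signal, hrtf_table, azimuth, elevation):
    """Correct one block of samples for the requested listening angle.

    hrtf_table (illustrative layout): {(azimuth_deg, elevation_deg):
    (gains, phases)}, where gains and phases are arrays with
    len(signal) // 2 + 1 entries, one per rfft bin.
    """
    gains, phases = hrtf_table[(azimuth, elevation)]
    spectrum = np.fft.rfft(signal)
    # First compensation value: adjust magnitude (frequency/intensity) and phase.
    corrected = spectrum * gains * np.exp(1j * phases)
    return np.fft.irfft(corrected, n=len(signal))

# e.g. the setting "30 degrees left of front, elevation 45 degrees" would
# select hrtf_table[(-30, 45)] (the sign convention is assumed here).
```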
The correction module 3000 can also include:
a determining unit 3040, configured to determine a second compensation value of the acoustic signals in intensity according to the proportional relationship;
a second compensation unit 3050, configured to correct the acoustic signals in real time according to the second compensation value.
From the law of sound propagation in air, acoustic energy is inversely proportional to the square of the distance; the amount of intensity compensation can therefore be calculated, and the original sound wave corrected in real time.
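Because energy falls off as 1/d², amplitude falls off as 1/d, so the second compensation value reduces to a ratio of the preset and actual distances. A minimal sketch under that assumption:

```python
def second_compensation_gain(actual_distance, preset_distance):
    """Amplitude gain that moves a source from its preset to its actual distance.

    Energy ~ 1/d^2 implies amplitude ~ 1/d, so the correction is the ratio
    of preset to actual distance (an illustrative sketch).
    """
    return preset_distance / actual_distance

# Example: a source authored at 2.0 m but placed at 1.0 m by the optical
# parameters gets a gain of 2.0, i.e. about +6 dB.
```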
Finally, the processed sound waveform is output to the user through the output interface, bringing a playback effect with accurate sound-image adaptation. The distance and relative angle of the sounding body's audio playback are adjusted in real time according to the optical parameters of the VR helmet, so that the sounding body the user sees with the eyes and hears with the ears is in the same position and no sound-image deviation is produced, giving a better user experience and a stronger sense of immersion.
Embodiment Five
Figure 14 is a schematic diagram of a mobile terminal according to an embodiment of the present invention. As shown in Figure 14, the mobile terminal 0002 of the present embodiment includes: a recording module 00021 for receiving acoustic signals, and the sound waveform processing device 0001 of any of the above embodiments.
Embodiment Six
Figure 15 is a schematic diagram of a virtual reality helmet according to an embodiment of the present invention. As shown in Figure 15, the virtual reality helmet 0003 of the present embodiment includes the above mobile terminal 0002.
It should be noted that, herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The numbering of the embodiments of the present invention is for description only and does not indicate the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general hardware platform, or of course by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (8)
1. A sound waveform processing method, suitable for a virtual reality helmet, characterized by comprising:
collecting acoustic signals and obtaining the sound-source angle data of the acoustic signals;
determining, according to the optical parameters of the optical lens of the virtual reality helmet, the actual distance from the sound source of the acoustic signals to a specified object, and determining the proportional relationship between the actual distance and the preset distance from the sound source to the specified object;
correcting the acoustic signals in real time according to the sound-source angle data and the proportional relationship;
wherein the collecting of the acoustic signals and the obtaining of the sound-source angle data of the acoustic signals comprise:
starting the sound-recording function of a mobile terminal and collecting the acoustic signals in real time;
collecting, through a microphone array formed by two microphones arranged in the specified object at a relative angle of 180°, the frequency response of the acoustic signals, the inter-microphone time difference, and the inter-microphone intensity difference;
looking up a preset sound-source angle database and obtaining the sound-source angle data of the acoustic signals according to the frequency response, the inter-microphone time difference, and the inter-microphone intensity difference;
and wherein the correcting of the acoustic signals in real time according to the sound-source angle data comprises:
sending the acoustic signals and the sound-source angle data to a listening angle rotation module;
calling a preset head-related transfer function to obtain a first compensation value of the acoustic signals in phase, frequency, and intensity;
correcting the acoustic signals in real time according to the first compensation value.

2. The sound waveform processing method according to claim 1, characterized in that the sound-source angle database is generated by:
in an anechoic chamber, with the midpoint between the two microphones at a relative angle of 180° as the center, playing a [20, 20000] Hz swept-frequency signal at each three-dimensional coordinate point on concentric circles;
starting the sound-recording function of the mobile terminal and collecting the acoustic signals at each three-dimensional coordinate point through the microphone array;
establishing, from the acoustic signals at each three-dimensional coordinate point, the correspondence between the sound-source angle and the frequency response, the inter-microphone time difference, and the inter-microphone intensity difference, to form the sound-source angle database.

3. The sound waveform processing method according to claim 1, characterized in that the correcting of the acoustic signals in real time according to the proportional relationship comprises:
determining a second compensation value of the acoustic signals in intensity according to the proportional relationship;
correcting the acoustic signals in real time according to the second compensation value.

4. A sound waveform processing device, characterized by comprising:
an acquisition module, configured to collect acoustic signals and obtain the sound-source angle data of the acoustic signals;
a determining module, configured to determine, according to the optical parameters of the optical lens of a virtual reality helmet, the actual distance from the sound source of the acoustic signals to a specified object, and to determine the proportional relationship between the actual distance and the preset distance from the sound source to the specified object;
a correction module, configured to correct the acoustic signals in real time according to the sound-source angle data and the proportional relationship;
wherein the acquisition module comprises:
a recording start unit, configured to start the sound-recording function of a mobile terminal and collect the acoustic signals in real time;
a parameter acquisition unit, configured to collect, through a microphone array formed by two microphones arranged in the specified object at a relative angle of 180°, the frequency response of the acoustic signals, the inter-microphone time difference, and the inter-microphone intensity difference;
a comparison unit, configured to look up a preset sound-source angle database and obtain the sound-source angle data of the acoustic signals according to the frequency response, the inter-microphone time difference, and the inter-microphone intensity difference;
and the correction module comprises:
a transmitting unit, configured to send the acoustic signals and the sound-source angle data to a listening angle rotation module;
a calling unit, configured to call a preset head-related transfer function to obtain a first compensation value of the acoustic signals in phase, frequency, and intensity;
a first compensation unit, configured to correct the acoustic signals in real time according to the first compensation value.

5. The sound waveform processing device according to claim 4, characterized in that the sound-source angle database is generated by:
in an anechoic chamber, with the midpoint between the two microphones at a relative angle of 180° as the center, playing a [20, 20000] Hz swept-frequency signal at each three-dimensional coordinate point on concentric circles;
starting the sound-recording function of the mobile terminal and collecting the acoustic signals at each three-dimensional coordinate point through the microphone array;
establishing, from the acoustic signals at each three-dimensional coordinate point, the correspondence between the sound-source angle and the frequency response, the inter-microphone time difference, and the inter-microphone intensity difference, to form the sound-source angle database.

6. The sound waveform processing device according to claim 4, characterized in that the correction module further comprises:
a determining unit, configured to determine a second compensation value of the acoustic signals in intensity according to the proportional relationship;
a second compensation unit, configured to correct the acoustic signals in real time according to the second compensation value.

7. A mobile terminal, comprising a recording module, characterized by comprising the sound waveform processing device according to any one of claims 4 to 6.

8. A virtual reality helmet, comprising an optical lens, characterized by comprising the mobile terminal according to claim 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201610379135.2A | 2016-05-31 | 2016-05-31 | A kind of processing method of sound waveform, device, mobile terminal and VR helmets
Publications (2)

Publication Number | Publication Date
---|---
CN105916096A | 2016-08-31
CN105916096B | 2018-01-09