CN106057207A - Remote stereo all-around real-time transmission and playing method - Google Patents

Remote stereo all-around real-time transmission and playing method

Info

Publication number
CN106057207A
Authority
CN
China
Prior art keywords
transmission
user
time
processing unit
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610494569.7A
Other languages
Chinese (zh)
Other versions
CN106057207B (en)
Inventor
党少军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Virtual Reality Technology Co Ltd
Original Assignee
Shenzhen Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Virtual Reality Technology Co Ltd filed Critical Shenzhen Virtual Reality Technology Co Ltd
Priority to CN201610494569.7A
Publication of CN106057207A
Application granted
Publication of CN106057207B
Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H04L65/762 Media network packet handling at the source

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Stereophonic System (AREA)

Abstract

The invention provides a remote stereo all-around real-time transmission and playing method. The method involves a server, a transmission system, and a terminal; the terminal includes a processing unit, a movement detection unit, and an acoustic unit; the processing unit is electrically connected with the movement detection unit and the acoustic unit, and the movement detection unit includes a position detection device, an attitude detection device, a speed detection unit, and an angular velocity detection unit. Compared with the prior art, sound data are transmitted selectively according to the detection result of the movement detection unit, which saves considerable network bandwidth and makes remote stereo all-around real-time transmission and playing achievable. By dividing the space into cubes, sound selection is regionalized and quantified, so the sound can be faithfully restored while the amount of transmitted data is reduced.

Description

Remote stereo all-around real-time transmission and playing method
Technical field
The present invention relates to the field of stereo transmission, and more particularly to a remote stereo all-around real-time transmission and playing method.
Background art
In existing virtual reality and augmented reality applications, stereo playback is mostly independent of position. The required stereo content is produced in advance, stored in the relevant device, and released as a whole when needed. This approach is simple, but it seriously impairs the user's sense of immersion. Some existing all-around stereo playback technologies produce all-around stereo sound related to position and angle, store it in the relevant device, and retrieve the stereo content corresponding to the current position and angle for playback. This approach restores the scene well and builds a strong sense of immersion, but it is rarely used for remote all-around real-time stereo transmission, because the set of sounds for every position and every direction has a very large data volume; transmitting these data in real time would seriously occupy bandwidth and squeeze the space available for image transmission, causing the image stream to stutter and thereby degrading the virtual reality experience and the sense of immersion.
Summary of the invention
To overcome the defect that current remote stereo cannot be transmitted in an all-around manner and therefore impairs the sense of immersion, the present invention provides a remote stereo all-around real-time transmission and playing method that supports all-around transmission and a strong sense of immersion.
The technical solution adopted by the present invention to solve the technical problem is to provide a remote stereo all-around real-time transmission and playing method involving a server, a transmission system, and a terminal. The terminal includes a processing unit, a motion detection unit, and an acoustic unit; the processing unit is electrically connected with the motion detection unit and the acoustic unit respectively, and the motion detection unit includes a position detection device and an attitude detection device. The method comprises the following steps:
S1: the motion detection unit detects the motion state of the user and transmits the detection result to the processing unit;
S2: the server divides the space in which the user is located into n cubes and transmits the cube information to the processing unit; the processing unit, according to the coordinate information provided by the motion detection unit, determines the cube region in which the user is located and the cube regions the user may reach, which together form the transmission sound region;
S3: the server transmits the sound data corresponding to the transmission sound region to the terminal.
Preferably, the transmission sound region is calculated as follows:
S2.11: the processing unit records the user coordinate information (X0, Y0, Z0) provided by the motion detection unit;
S2.12: the processing unit calculates the forward maximum offsets (ΔX1, ΔY1, ΔZ1) and the reverse maximum offsets (ΔX2, ΔY2, ΔZ2) of the user coordinates and assembles the set of coordinates the user may occupy: Ф = {(X, Y, Z) | X0 − ΔX2 < X < X0 + ΔX1, Y0 − ΔY2 < Y < Y0 + ΔY1, Z0 − ΔZ2 < Z < Z0 + ΔZ1};
S2.13: the cube region occupied by the points of the set Ф is the transmission sound region.
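For illustration only, the following Python sketch shows how such an offset-bounded set Ф could be mapped onto cube indices; the cube size, function names, and example values are assumptions, not part of the original disclosure.

```python
# Minimal sketch (assumed helper, not the patent's code): given the user's position
# and the forward/reverse maximum offsets, list the indices of the cubes overlapped
# by the coordinate set Ф.
from itertools import product
import math

def transmission_sound_region(p0, fwd, rev, cube_size=1.0):
    """p0 = (X0, Y0, Z0); fwd = (dX1, dY1, dZ1); rev = (dX2, dY2, dZ2)."""
    lo = [p0[i] - rev[i] for i in range(3)]            # lower corner of Ф
    hi = [p0[i] + fwd[i] for i in range(3)]            # upper corner of Ф
    ranges = [range(int(math.floor(lo[i] / cube_size)),
                    int(math.floor(hi[i] / cube_size)) + 1) for i in range(3)]
    return set(product(*ranges))                       # cube indices forming the region

# Example: the user at (2.0, 1.5, 0.3) with 0.5 m offsets in every direction.
print(transmission_sound_region((2.0, 1.5, 0.3), (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)))
```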
Preferably, the transmission sound region may instead be calculated as follows:
S2.21: the motion detection unit records the linear velocities vx, vy, vz of the user 20 along the x-, y-, and z-axes; the delay time is denoted t10; the processing unit records the maximum accelerations ax, ay, az of the user 20 along the x-, y-, and z-axes;
S2.22: the processing unit calculates the coordinate set that the user may occupy within the delay time t10: Ф1 = {(x, y, z) | x0 − (vx·t10 + ax·t10²/2) ≤ x ≤ x0 + (vx·t10 + ax·t10²/2), y0 − (vy·t10 + ay·t10²/2) ≤ y ≤ y0 + (vy·t10 + ay·t10²/2), z0 − (vz·t10 + az·t10²/2) ≤ z ≤ z0 + (vz·t10 + az·t10²/2)}; the cube region occupied by the points of the set Ф1 is the transmission sound region.
Preferably, the delay time is the time from the moment the user's motion state is detected until the server finishes transmitting the corresponding downlink data.
Preferably, the transmission sound region may further be calculated as follows:
S2.31: the processing unit records the maximum linear velocities vx', vy', vz' of the user along the x-, y-, and z-axes within the delay time;
S2.32: the processing unit calculates the set Ф2 = {(x, y, z) | x0 − vx'·t10 ≤ x ≤ x0 + vx'·t10, y0 − vy'·t10 ≤ y ≤ y0 + vy'·t10, z0 − vz'·t10 ≤ z ≤ z0 + vz'·t10};
S2.33: let Ф3 = Ф1 ∩ Ф2; the cube region occupied by the points of the set Ф3 is the transmission sound region.
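As an illustration of the two bounds and their intersection, here is a minimal Python sketch; the helper names and the example velocities, accelerations, and delay are assumptions rather than values from the patent.

```python
# Minimal sketch (assumptions, not the patent's code): bound the user's coordinates
# within the delay time t10 using velocity plus maximum acceleration (set Ф1), cap it
# with the maximum attainable speed (set Ф2), and intersect the two per axis (Ф3).
def axis_bounds(x0, v, a_max, v_max, t10):
    d_accel = abs(v) * t10 + 0.5 * a_max * t10 ** 2    # kinematic bound (Ф1)
    d_speed = v_max * t10                              # limiting-speed bound (Ф2)
    d = min(d_accel, d_speed)                          # intersection (Ф3 = Ф1 ∩ Ф2)
    return x0 - d, x0 + d

def phi3_box(p0, v, a_max, v_max, t10):
    return [axis_bounds(p0[i], v[i], a_max[i], v_max[i], t10) for i in range(3)]

# Example: 200 ms delay, walking-speed velocities, plausible accelerations.
print(phi3_box((2.0, 1.5, 0.3), (0.8, 0.0, 0.1), (2.0, 2.0, 2.0), (1.5, 1.5, 1.0), 0.2))
```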
Preferably, the server provides m virtual sound sources in each cube, and each virtual sound source can simulate a sounding source; the processing unit requests the data of the m virtual sound sources from the server, integrates the sound emitted by all the virtual sound sources, and transmits it to the acoustic unit through the CH1 and CH2 channels.
Preferably, the processing unit determines the user's position and facing direction according to the user coordinate information provided by the motion detection unit, and simulates the acoustic information transmitted by the virtual sound source 40 to each of the user's two ears.
Preferably, the terminal further includes a response test device that can measure the response time, where the response time is the time from the terminal sending a signal to the server until the terminal receives the corresponding return signal.
Preferably, the server determines the quantity and positions of the virtual sound sources according to the response time measured by the response test device and the performance of the terminal.
Preferably, the terminal is a virtual reality helmet or augmented reality glasses.
Compared with the prior art, the present invention transmits sound data selectively according to the detection result of the motion detection unit, which saves considerable network bandwidth and makes remote stereo all-around real-time transmission and playing achievable. Dividing the space into cubes regionalizes and quantifies the sound selection, which helps restore the sound faithfully while reducing the amount of transmitted data. Extracting the sound-emitting region and the transmission sound region not only reduces the amount of transmitted data but also avoids a perceptible delay. Determining the maximum offsets of the user's coordinates within the delay time allows the transmission sound region to be delimited more precisely, further reducing the amount of transmitted data. The virtual sound sources fully restore the stereo sound and further improve the sense of immersion. The response test device measures the user's network speed, so that the number of virtual sound sources can be chosen according to network conditions and device performance to maximize sound quality while guaranteeing normal sound transmission, and it also conveniently yields the delay time t10. Having the virtual sound sources transmit sound to each of the user's two ears separately further improves the realism of the stereo sound.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments, in which:
Fig. 1 shows a current remote sound real-time transmission and playing method;
Fig. 2 is a schematic diagram of all-around stereo playback;
Fig. 3 is a structural schematic diagram of the remote stereo all-around real-time transmission and playing method of the present invention;
Fig. 4 is a schematic diagram of the principle of the remote stereo all-around real-time transmission and playing method of the present invention;
Fig. 5 is a schematic diagram of the cubic space of the remote stereo all-around real-time transmission and playing method of the present invention.
Detailed description of the invention
To overcome the defect that current remote stereo cannot be transmitted in an all-around manner and therefore impairs the sense of immersion, the present invention provides a remote stereo all-around real-time transmission and playing method that supports all-around transmission and a strong sense of immersion.
In order to make the technical features, objectives, and effects of the present invention more clearly understood, specific embodiments of the present invention are described in detail below with reference to the drawings.
Referring to Fig. 1, which shows a current remote sound real-time transmission and playing method: the terminal 13 includes an acoustic unit 133; the terminal 13 requests data from the server 11, the server 11 transmits the sound data corresponding to the data requested by the terminal 13 back to the terminal 13, and the terminal 13, after processing, transmits the sound signal to the acoustic unit 133.
Referring to Fig. 2, a schematic diagram of all-around stereo playback. A panoramic image is presented by a virtual reality helmet, augmented reality glasses, or a panoramic curved screen, and it must be ensured that no matter which direction the user 20 faces or how the user moves, the image feels like reality. Stereo sound, which cooperates with the panoramic image, is an essential factor for substantially improving the sense of immersion. Some existing all-around stereo playback technologies produce all-around stereo sound related to position and angle, store it in the relevant device, and retrieve the stereo content corresponding to the current position and angle for playback. However, this approach runs into difficulty in remote all-around real-time stereo transmission, because in remote real-time transmission the full 360-degree stereo sound cannot be produced in advance, and the set of sounds for every position and direction has a very large data volume; transmitting these data in real time would seriously occupy bandwidth, squeeze the space available for image transmission, cause the image stream to stutter, and thereby degrade the virtual reality experience and the sense of immersion.
Referring to Fig. 3 to Fig. 5, in the present invention, remote stereo all-around real-time transmission and playback requires a server 11, a transmission system 12, and a terminal 13; the server 11 and the terminal 13 are connected through the transmission system 12 and exchange information with each other. The terminal 13 includes a processing unit 137, an acoustic unit 133, an environment simulation unit 139, and a motion detection unit 135; the processing unit 137 is electrically connected with the acoustic unit 133, the environment simulation unit 139, and the motion detection unit 135 respectively. The environment simulation unit 139 can simulate an environmental scene according to commands from the processing unit 137; the server 11 can transmit environment information to the terminal 13, which the processing unit 137 expresses by commanding the environment simulation unit 139. The environment simulation unit 139 may be equipped with a blower (not shown) to simulate wind direction and a water sprayer (not shown) to simulate spray, rain, and other environmental changes. The motion detection unit 135 includes an attitude detection device 1353, a speed detection device 1355, and a position detection device 1357. The processing unit 137 includes a response test device 1371, which can measure the network response speed.
The server 11 divides the space in which the user 20 is located into n cubes and transmits the cube data to the processing unit 137. The user 20 is contained in one of the cubes, which we call the sound-emitting region 51. Within the sound-emitting region 51, the server 11 places m virtual sound sources 40. The processing unit 137 can obtain the acoustic information and position information of the virtual sound sources 40 by requesting data from the server 11. Each virtual sound source 40 can be used by the processing unit 137 to simulate a sounding source; the processing unit 137 integrates them and transmits the result to the acoustic unit 133 through the CH1 and CH2 channels. The sound emitted by a virtual sound source 40 is determined by the user's position coordinates and angular coordinates; from these, the processing unit 137 can determine the distance and direction between the virtual sound source 40 and each of the user's two ears, and simulate the sound arriving at each ear according to the following formula:
Lp = Lw − K + DIm − Ae
where Lp is the sound pressure at each of the user's two ears and Lw is the sound pressure of the sound source, equivalent to its loudness; the propagation distance r is contained in the parameter K.
(1) For spherical-wave radiation, the divergence attenuation K is:
K = 10·log10(4π) + 20·log10(r)
where r is the distance between the virtual sound source 40 and the individual ear.
(2) Directivity index DIm: it accounts for whether there is a reflecting surface near the source (the ground is considered separately) or whether the source is inherently not a point source; each added reflecting surface adds 3 dB.
(3) Other additional attenuations Ae are ignored in this formula, so the formula becomes:
Lp = Lw − 10·log10(4π) − 20·log10(r) + DIm
Using the above formula, the sound arriving at each of the two ears from each virtual sound source 40 is calculated and transmitted to that ear. The processing unit 137 integrates the sound of all the virtual sound sources 40 and transmits the final integrated result to the acoustic unit 133.
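To make the per-ear calculation concrete, the following Python sketch evaluates Lp = Lw − 10·log10(4π) − 20·log10(r) + DIm for each virtual sound source and each ear and sums the contributions into the CH1/CH2 channels. The incoherent energy summation, the reflecting-surface counter, and all names and numbers are illustrative assumptions, not the patent's own implementation.

```python
# Minimal sketch (assumed helpers): per-ear sound pressure from each virtual source,
# then an energy sum per channel. Sources are treated as point sources; DIm adds
# 3 dB per reflecting surface; Ae is ignored as in the text above.
import math

def ear_level_db(Lw, r, reflecting_surfaces=0):
    """Lp = Lw - 10*log10(4*pi) - 20*log10(r) + DIm, with Ae ignored."""
    DIm = 3.0 * reflecting_surfaces
    return Lw - 10 * math.log10(4 * math.pi) - 20 * math.log10(r) + DIm

def mix_channels(sources, left_ear, right_ear):
    """sources: list of (position, Lw); returns the summed level per ear in dB."""
    def total(ear):
        energies = []
        for pos, Lw in sources:
            r = math.dist(pos, ear)                    # distance source -> ear
            energies.append(10 ** (ear_level_db(Lw, r) / 10))
        return 10 * math.log10(sum(energies))          # incoherent energy sum
    return total(left_ear), total(right_ear)           # CH1, CH2

ch1, ch2 = mix_channels([((1.0, 0.0, 0.0), 70.0), ((0.0, 2.0, 0.0), 65.0)],
                        left_ear=(-0.1, 0.0, 0.0), right_ear=(0.1, 0.0, 0.0))
print(round(ch1, 1), round(ch2, 1))
```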
From the detection of the motion state and position information of the user 20 to the delivery of the acoustic information to the user, one transmission and processing cycle is required. Let the starting time of this cycle be T0. The cycle consists of: the detection time t1 for the motion state of the user 20 to be detected and delivered to the processing unit 137; the processing time t2 of the processing unit 137; the time t3 for the processing unit 137 to transmit data to the server 11; the processing time t4 of the server 11; and the time t5 for the server 11 to transmit the corresponding sound data to the processing unit 137. We call the period (t1 + t2 + t3 + t4 + t5) the delay time, denoted t10. The delay time t10 varies with the server 11, the performance of the terminal 13, and the network transmission speed. It is easy to see that the response time measured by the response test device 1371 is (t3 + t4 + t5); since t1 and t2 are essentially constant for a specific terminal, the delay time t10 can be calculated from the response time (t3 + t4 + t5) and the fixed times t1 and t2.
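A minimal sketch of this relationship, assuming placeholder values for the terminal-specific constants t1 and t2:

```python
# Minimal sketch (assumed names and values): the response test device measures
# (t3 + t4 + t5); t1 and t2 are fixed, terminal-specific constants, so
# t10 = t1 + t2 + response_time.
def delay_time(response_time_s, t1_s=0.005, t2_s=0.010):
    """The constants here are placeholders, not values from the patent."""
    return t1_s + t2_s + response_time_s

print(delay_time(0.080))  # e.g. an 80 ms measured round trip gives t10 = 0.095 s
```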
Due to the limits of human movement, the number of cubes a person can cross in a short time is limited. We call the cube region that the user may pass through within the delay time t10 the transmission sound region 50. The transmission and playback process of the remote stereo all-around real-time transmission and playing method of the present invention is as follows: at time T0, the angular coordinate information and position coordinate information of the user 20 are detected by the motion detection unit 135 and delivered to the processing unit 137 after a time t1; the processing unit 137 processes for a time t2 and requests the data of the transmission sound region 50 from the server 11; the request is transmitted to the server 11 after a time t3; the server 11 processes for a time t4 and sends the data corresponding to the transmission sound region 50 downlink to the terminal 13, which receives it completely after a time t5; this moment is denoted T1. Meanwhile, the motion detection unit 135 detects the angular coordinate information and position coordinate information of the user 20 at time T1 and transmits them to the processing unit 137; after processing, the processing unit 137 extracts from the received transmission sound region 50 the sound-emitting region 51 corresponding to the angular and position coordinates at time T1 and transmits it to the acoustic unit 133.
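For illustration, the following Python sketch mimics the T0 to T1 cycle with trivial stand-ins for the server and the terminal units; all interfaces are hypothetical, and the ±1-cube neighbourhood merely stands in for the Ф-based bound derived from t10.

```python
# Minimal sketch (hypothetical interfaces, not the patent's code): request the
# transmission sound region from the T0 pose, then play the sound-emitting cube
# selected from the T1 pose.
def cube_of(pos, cube_size=1.0):
    return tuple(int(c // cube_size) for c in pos)

def playback_cycle(request_region, detect_pose, play):
    pos_t0, _ = detect_pose()                              # pose detected at T0
    c = cube_of(pos_t0)
    # The +-1-cube neighbourhood stands in for the Ф-based bound derived from t10.
    region = {(c[0] + dx, c[1] + dy, c[2] + dz)
              for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)}
    sound_data = request_region(region)                    # received by T1
    pos_t1, orientation_t1 = detect_pose()                 # pose detected at T1
    emitting_cube = cube_of(pos_t1)                        # sound-emitting region 51
    play(sound_data.get(emitting_cube), orientation_t1)

# Example wiring with trivial stand-ins:
playback_cycle(request_region=lambda region: {c: f"audio@{c}" for c in region},
               detect_pose=lambda: ((2.0, 1.5, 0.3), (0.0, 0.0, 0.0)),
               play=lambda clip, ori: print("playing", clip, "facing", ori))
```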
In the present invention, the extent of the transmission sound region 50 is important. If the transmission sound region 50 is too small, the user 20 may move outside it within the buffer time, so that the sound-emitting region 51 falls outside the transmission sound region 50 and no sound can be played; if the transmission sound region 50 is too large, the amount of data transmitted over the network increases, which may affect the video bandwidth and the sense of immersion when the network is unstable.
The first embodiment of the present invention determines the transmission sound region 50 by calculating the maximum coordinate offsets of the user 20. We set up a virtual rectangular coordinate system; the angular coordinates of the user 20 about the x-, y-, and z-axes and the position coordinates (X0, Y0, Z0) at this moment are recorded, so the coordinate information of the user 20 at this moment is (X0, Y0, Z0). The processing unit 137 calculates the forward maximum offsets (ΔX1, ΔY1, ΔZ1) and the reverse maximum offsets (ΔX2, ΔY2, ΔZ2) of the coordinates of the user 20 within the delay time t10, and assembles the set of coordinates the user 20 may occupy:
Ф = {(X, Y, Z) | X0 − ΔX2 < X < X0 + ΔX1, Y0 − ΔY2 < Y < Y0 + ΔY1, Z0 − ΔZ2 < Z < Z0 + ΔZ1};
The set of cubes in which the coordinates of the set Ф fall is the transmission sound region 50. The corresponding coordinates can be obtained by the motion detection unit 135.
There are many algorithms for the maximum coordinate offset; one of them uses the maximum motion angular acceleration and linear acceleration of the user 20. Here, let the maximum motion linear acceleration of the user 20 be a; then the maximum linear accelerations of the user 20 along the x-, y-, and z-axes are ax, ay, az, and the linear velocities of the user 20 along the x-, y-, and z-axes are vx, vy, vz, which can be obtained by the motion detection unit 135. Within the delay time t10, the maximum displacements of the user along the x-, y-, and z-axes are (vx·t10 + ax·t10²/2), (vy·t10 + ay·t10²/2), and (vz·t10 + az·t10²/2), so the range of the coordinates is:
{x0 − (vx·t10 + ax·t10²/2) ≤ x ≤ x0 + (vx·t10 + ax·t10²/2), y0 − (vy·t10 + ay·t10²/2) ≤ y ≤ y0 + (vy·t10 + ay·t10²/2), z0 − (vz·t10 + az·t10²/2) ≤ z ≤ z0 + (vz·t10 + az·t10²/2)}
The set of coordinates is then:
Ф1 = {(x, y, z) | x0 − (vx·t10 + ax·t10²/2) ≤ x ≤ x0 + (vx·t10 + ax·t10²/2), y0 − (vy·t10 + ay·t10²/2) ≤ y ≤ y0 + (vy·t10 + ay·t10²/2), z0 − (vz·t10 + az·t10²/2) ≤ z ≤ z0 + (vz·t10 + az·t10²/2)}.
Computing the coordinate set in this way greatly reduces the transmission sound region 50 and saves resources.
The second embodiment of the present invention further reduces the transmission sound region 50 on the basis of the first embodiment. Because a person has a limiting speed during motion, accelerated motion does not continue once the limiting speed is reached. Therefore, let the maximum movement speed of the user 20 be v; then the maximum speeds of the user 20 along the x-, y-, and z-axes are vx', vy', vz', and within the delay time t10 the maximum distances the user 20 can move along the x-, y-, and z-axes are vx'·t10, vy'·t10, vz'·t10, so the range of the coordinates is:
{x0 − vx'·t10 ≤ x ≤ x0 + vx'·t10, y0 − vy'·t10 ≤ y ≤ y0 + vy'·t10, z0 − vz'·t10 ≤ z ≤ z0 + vz'·t10}
The set of coordinates is then:
Ф2 = {(x, y, z) | x0 − vx'·t10 ≤ x ≤ x0 + vx'·t10, y0 − vy'·t10 ≤ y ≤ y0 + vy'·t10, z0 − vz'·t10 ≤ z ≤ z0 + vz'·t10}.
However the user 20 accelerates or turns the head, the coordinates will not exceed the range of the set Ф2. We therefore let Ф3 = Ф1 ∩ Ф2; Ф3 is then the set of coordinates the user may occupy. This further reduces the transmission sound region 50 and saves a large amount of transmitted data.
In theory, the larger the number of virtual sound sources 40, the more faithfully the sound can be restored. However, limited by device performance and network bandwidth, we cannot increase the number of virtual sound sources 40 without bound. The server 11 stores schemes with different numbers of virtual sound sources 40 for different network conditions and device capabilities; the server 11 judges the performance of the terminal 13 from its model and, together with the response time measured by the response test device 1371, determines the number and positions of the virtual sound sources 40, and transmits the data of the corresponding number of virtual sound sources 40 to the processing unit 137.
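The selection could look like the following Python sketch; the thresholds, tiers, and source counts are illustrative assumptions, not values from the patent.

```python
# Minimal sketch (assumed values): choose how many virtual sound sources to stream
# from the measured response time and a coarse device-performance tier.
def choose_source_count(response_time_s, device_tier):
    """device_tier: 'low', 'mid', or 'high' (hypothetical classification)."""
    by_tier = {"low": 4, "mid": 8, "high": 16}
    m = by_tier.get(device_tier, 4)
    if response_time_s > 0.150:        # slow network: halve the source count
        m = max(2, m // 2)
    return m

print(choose_source_count(0.080, "mid"))   # -> 8
print(choose_source_count(0.200, "high"))  # -> 8
```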
Compared with the prior art, the present invention transmits sound data selectively according to the detection result of the motion detection unit 135, which saves considerable network bandwidth and makes remote stereo all-around real-time transmission and playing achievable. Dividing the space into cubes regionalizes and quantifies the sound selection, which helps restore the sound faithfully while reducing the amount of transmitted data. Extracting the sound-emitting region 51 and the transmission sound region 50 not only reduces the amount of transmitted data but also avoids a perceptible delay. Determining the maximum offsets of the coordinates of the user 20 within the delay time allows the transmission sound region 50 to be delimited more precisely, further reducing the amount of transmitted data. The virtual sound sources 40 fully restore the stereo sound and further improve the sense of immersion. The response test device 1371 measures the network speed of the user 20, so that the number of virtual sound sources 40 can be chosen according to network conditions and device performance to maximize sound quality while guaranteeing normal sound transmission, and it also conveniently yields the delay time t10. Having the virtual sound sources 40 transmit sound to each of the two ears of the user 20 separately further improves the realism of the stereo sound.
The embodiments of the present invention have been described above with reference to the drawings, but the present invention is not limited to the above specific embodiments; the above specific embodiments are merely illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art may devise many further forms without departing from the spirit of the invention and the scope of the claims, and all of these fall within the protection of the present invention.

Claims (10)

  1. A remote stereo all-around real-time transmission and playing method, characterized in that it involves a server, a transmission system, and a terminal, the terminal includes a processing unit, a motion detection unit, and an acoustic unit, the processing unit is electrically connected with the motion detection unit and the acoustic unit respectively, and the motion detection unit includes a position detection device and an attitude detection device; the method comprises the following steps:
    S1: the motion detection unit detects the motion state of the user and transmits the detection result to the processing unit;
    S2: the server divides the space in which the user is located into n cubes and transmits the cube information to the processing unit; the processing unit, according to the coordinate information provided by the motion detection unit, determines the cube region in which the user is located and the cube regions the user may reach, which together form the transmission sound region;
    S3: the server transmits the sound data corresponding to the transmission sound region to the terminal.
  2. The remote stereo all-around real-time transmission and playing method according to claim 1, characterized in that the transmission sound region is calculated as follows:
    S2.11: the processing unit records the user coordinate information (X0, Y0, Z0) provided by the motion detection unit;
    S2.12: the processing unit calculates the forward maximum offsets (ΔX1, ΔY1, ΔZ1) and the reverse maximum offsets (ΔX2, ΔY2, ΔZ2) of the user coordinates and assembles the set of coordinates the user may occupy: Ф = {(X, Y, Z) | X0 − ΔX2 < X < X0 + ΔX1, Y0 − ΔY2 < Y < Y0 + ΔY1, Z0 − ΔZ2 < Z < Z0 + ΔZ1};
    S2.13: the cube region occupied by the points of the set Ф is the transmission sound region.
  3. The remote stereo all-around real-time transmission and playing method according to claim 2, characterized in that the motion detection unit further includes a speed detection device and the transmission sound region is calculated as follows:
    S2.21: the speed detection device detects the linear velocities vx, vy, vz of the user 20 along the x-, y-, and z-axes; the delay time is denoted t10; the processing unit records the maximum accelerations ax, ay, az of the user 20 along the x-, y-, and z-axes;
    S2.22: the processing unit calculates the coordinate set that the user may occupy within the delay time t10: Ф1 = {(x, y, z) | x0 − (vx·t10 + ax·t10²/2) ≤ x ≤ x0 + (vx·t10 + ax·t10²/2), y0 − (vy·t10 + ay·t10²/2) ≤ y ≤ y0 + (vy·t10 + ay·t10²/2), z0 − (vz·t10 + az·t10²/2) ≤ z ≤ z0 + (vz·t10 + az·t10²/2)}; the cube region occupied by the points of the set Ф1 is the transmission sound region.
  4. The remote stereo all-around real-time transmission and playing method according to claim 3, characterized in that the delay time is the time from the moment the user's motion state is detected until the server finishes transmitting the corresponding downlink data.
  5. The remote stereo all-around real-time transmission and playing method according to claim 3, characterized in that the transmission sound region is calculated as follows:
    S2.31: the processing unit records the maximum linear velocities vx', vy', vz' of the user along the x-, y-, and z-axes within the delay time;
    S2.32: the processing unit calculates the set Ф2 = {(x, y, z) | x0 − vx'·t10 ≤ x ≤ x0 + vx'·t10, y0 − vy'·t10 ≤ y ≤ y0 + vy'·t10, z0 − vz'·t10 ≤ z ≤ z0 + vz'·t10};
    S2.33: let Ф3 = Ф1 ∩ Ф2; the cube region occupied by the points of the set Ф3 is the transmission sound region.
  6. The remote stereo all-around real-time transmission and playing method according to claim 1, characterized in that the server provides m virtual sound sources in each cube, each virtual sound source can simulate a sounding source, and the processing unit requests the data of the m virtual sound sources from the server, integrates the sound emitted by all the virtual sound sources, and transmits it to the acoustic unit through the CH1 and CH2 channels.
  7. The remote stereo all-around real-time transmission and playing method according to claim 6, characterized in that the processing unit determines the user's position and facing direction according to the user coordinate information provided by the motion detection unit, and simulates the acoustic information transmitted by the virtual sound source 40 to each of the user's two ears.
  8. The remote stereo all-around real-time transmission and playing method according to claim 6, characterized in that the terminal further includes a response test device that can measure the response time, where the response time is the time from the terminal sending a signal to the server until the terminal receives the corresponding return signal.
  9. The remote stereo all-around real-time transmission and playing method according to claim 8, characterized in that the server determines the quantity and positions of the virtual sound sources according to the response time measured by the response test device and the performance of the terminal.
  10. The remote stereo all-around real-time transmission and playing method according to any one of claims 1 to 9, characterized in that the terminal is a virtual reality helmet or augmented reality glasses.
CN201610494569.7A 2016-06-30 2016-06-30 Remote stereo omnibearing real-time transmission and playing method Expired - Fee Related CN106057207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610494569.7A CN106057207B (en) 2016-06-30 2016-06-30 Remote stereo omnibearing real-time transmission and playing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610494569.7A CN106057207B (en) 2016-06-30 2016-06-30 Remote stereo omnibearing real-time transmission and playing method

Publications (2)

Publication Number Publication Date
CN106057207A (en) 2016-10-26
CN106057207B (en) 2021-02-23

Family

ID=57166267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610494569.7A Expired - Fee Related CN106057207B (en) 2016-06-30 2016-06-30 Remote stereo omnibearing real-time transmission and playing method

Country Status (1)

Country Link
CN (1) CN106057207B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110651248A (en) * 2017-05-09 2020-01-03 微软技术许可有限责任公司 Spatial audio for three-dimensional data sets

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080167131A1 (en) * 2007-01-05 2008-07-10 Stmicroelectronics S.R.L. Interactive entertainment electronic system
CN103218198A (en) * 2011-08-12 2013-07-24 索尼电脑娱乐公司 Sound localization for user in motion
GB2508830A (en) * 2012-12-11 2014-06-18 Holition Ltd Augmented reality system for trying out virtual clothing with associated sound
CN104871558A (en) * 2012-11-28 2015-08-26 高通股份有限公司 Image generation for collaborative sound systems
CN105451152A (en) * 2015-11-02 2016-03-30 上海交通大学 Hearer-position-tracking-based real-time sound field reconstruction system and method


Also Published As

Publication number Publication date
CN106057207B (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN111107911B (en) competition simulation
US20210233304A1 (en) Systems and associated methods for creating a viewing experience
CN103257840B (en) Method for simulating audio source
US20180350136A1 (en) Systems and associated methods for creating a viewing experience
WO2020237611A1 (en) Image processing method and apparatus, control terminal and mobile device
CN106774830B (en) Virtual reality system, voice interaction method and device
WO2022095537A1 (en) Virtual object display method and apparatus, and storage medium and electronic device
CN107656718A (en) A kind of audio signal direction propagation method, apparatus, terminal and storage medium
KR101801120B1 (en) Method and apparatus for multi-camera motion capture enhancement using proximity sensors
CN104503092A (en) Three-dimensional display method and three-dimensional display device adaptive to different angles and distances
US20190306651A1 (en) Audio Content Modification for Playback Audio
GB2536020A (en) System and method of virtual reality feedback
CN106210692A Remote panoramic image real-time transmission and display method based on pupil detection
CN106057207A (en) Remote stereo all-around real-time transmission and playing method
CN106534968A (en) Method and system for playing 3D video in VR device
CN111243070B (en) Virtual reality presenting method, system and device based on 5G communication
TWI720463B (en) Audio modification system and method thereof
CN111401283A (en) Face recognition method and device, electronic equipment and storage medium
CN203825856U (en) Power distribution simulation training system
CN112233146B (en) Position recommendation method and device, computer readable storage medium and electronic equipment
KR20200052693A (en) Virtual reality player and integrated management system for monitoring thereof
CN106170082A Remote panoramic image all-around real-time transmission and display method
EP3672703A1 (en) Collision avoidance for wearable apparatuses
CN114733189A (en) Control method, device, medium and electronic equipment for somatosensory ball hitting
US11771954B2 (en) Method for calculating a swing trajectory of a golf club using radar sensing data, a radar sensing device using the same, and a recording medium readable by a computing device recording the method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210223