CN106937204B - Panoramic multichannel sound-image trajectory control method - Google Patents

Panoramic multichannel sound-image trajectory control method

Info

Publication number
CN106937204B
CN106937204B CN201511028156.1A
Authority
CN
China
Prior art keywords
acoustic image
speaker
track
audio
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201511028156.1A
Other languages
Chinese (zh)
Other versions
CN106937204A (en)
Inventor
肖建敏
翁世峰
冯嘉明
陈智杰
何飞
白连东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Li Feng Creative Exhibition Co Ltd
Original Assignee
Shanghai Li Feng Creative Exhibition Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Li Feng Creative Exhibition Co Ltd filed Critical Shanghai Li Feng Creative Exhibition Co Ltd
Priority to CN201511028156.1A priority Critical patent/CN106937204B/en
Publication of CN106937204A publication Critical patent/CN106937204A/en
Application granted granted Critical
Publication of CN106937204B publication Critical patent/CN106937204B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The present invention relates to the field of audio technology, and specifically to a panoramic multichannel sound-image trajectory control method. In the method, a console sets the output level value of each speaker node of a physical loudspeaker system so that the sound image moves in a preset manner, or remains stationary, over a preset total duration. The method comprises: generating sound-image trajectory data, the trajectory data including variable-region trajectory data; within the total duration covered by the trajectory data, adjusting the output level of each physical speaker according to the trajectory data; and, within that total duration, superimposing the input level of the signal fed to each physical speaker onto the output level of the corresponding speaker to obtain the level each speaker actually outputs. The invention yields richer and more varied sound-image trajectories and greatly expands the artistic expressiveness of a panoramic multichannel audio system.

Description

Panoramic multichannel sound-image trajectory control method
Technical field
The present invention relates to the field of audio technology, and specifically to a panoramic multichannel sound-image trajectory control method.
Background technique
In the production of a stage program, one prominent problem is the coordination and synchronized control of the various disciplines (audio, video, lighting, machinery, and so on). In a large-scale performance each discipline is relatively independent, and a sizable crew is needed to guarantee smooth arrangement and performance of the show. While arranging a program, most of the time is therefore spent on inter-discipline coordination and synchronization rather than on the program itself.
Because the disciplines are relatively independent, their control methods differ greatly. For live audio-visual synchronized editing, video is controlled from the lighting console while audio is edited on a multitrack playback system. Audio can easily seek to any time and start playback, but video can only start from its first frame (an operator can manually cue it to a given position, but it cannot chase timecode), which is not flexible enough for live performance control.
In addition, in existing film and stage professional loudspeaker systems the speaker positions are fixed: reinforcement is mainly through left and right channels on either side of the stage, or through left/center/right mains that essentially pin the sound image at center stage. Although a venue installs many additional speakers at various positions besides the stage mains, the sound image of the system hardly changes over an entire performance.
Flexible editing and synchronized control of stage programs, together with flexible control of sound-image movement, are therefore key technical problems that this field urgently needs to solve.
Summary of the invention
The technical problem to be solved by the present invention is to provide a multi-discipline collaborative editing and control method that simplifies multi-discipline control in film, stage-performance and similar settings and allows the sound image of a sound-reinforcement system to be arranged flexibly and quickly.
To solve the above technical problem, the technical solution adopted by the present invention is a panoramic multichannel sound-image trajectory control method, in which a console sets the output level value of each speaker node of a physical loudspeaker system so that the sound image moves in a preset manner, or remains stationary, over a preset total duration. The method comprises: generating sound-image trajectory data, the trajectory data including variable-region trajectory data; within the total duration covered by the trajectory data, adjusting the output level of each physical speaker according to the trajectory data; and, within that total duration, superimposing the input level of the signal fed to each physical speaker onto the output level of the corresponding speaker to obtain the level each speaker actually outputs.
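The superposition step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the function names and the assumption that levels are expressed in dB (so superposition is addition) are mine.

```python
def actual_output_levels(input_levels_db, trajectory_levels_db):
    """Superimpose each speaker's input level with the trajectory-controlled
    output level (both assumed in dB, so superposition is addition) to get
    the level each physical speaker actually outputs."""
    return {node: input_levels_db[node] + trajectory_levels_db.get(node, 0.0)
            for node in input_levels_db}

# A trajectory that leaves the left speaker untouched but pulls the right
# speaker down 60 dB moves the perceived sound image hard left.
levels = actual_output_levels({"L": -6.0, "R": -6.0}, {"L": 0.0, "R": -60.0})
```

Because only the trajectory term changes over time, re-evaluating this superposition at each time step is enough to make the image move while the programme audio keeps its own input level.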
Detailed description of the invention
Fig. 1 is a schematic of the multi-discipline collaborative editing and control method of the embodiment.
Fig. 2 is a schematic of the audio-control part of the method.
Fig. 3 is a schematic of the operation of the audio sub-tracks of the method.
Fig. 4 is a schematic of the video-control part of the method.
Fig. 5 is a schematic of the lighting-control part of the method.
Fig. 6 is a schematic of the device-control part of the method.
Fig. 7 is a schematic of the multi-discipline collaborative editing and control system of the embodiment.
Fig. 8 is a schematic of the audio control module of the system.
Fig. 9 is a schematic of the video control module of the system.
Fig. 10 is a schematic of the lighting control module of the system.
Fig. 11 is a schematic of the device control module of the system.
Fig. 12 is a schematic of the interface of the multitrack playback editor module of the method.
Fig. 13 is a schematic of the audio-control part of the system.
Fig. 14 is a schematic of the track-matrix module of the system.
Fig. 15 is a schematic of the video-control part of the system.
Fig. 16 is a schematic of the lighting-control part of the system.
Fig. 17 is a schematic of the device-control part of the system.
Fig. 18 is a schematic of the steps of the sound-image trajectory control method of the embodiment.
Fig. 19 is a schematic of the steps of the sound-image trajectory data generation method of the embodiment.
Fig. 20 is a first schematic of a speaker distribution map and a variable-region sound-image trajectory of the embodiment.
Fig. 21 is a second schematic of a speaker distribution map and a variable-region sound-image trajectory of the embodiment.
Specific embodiment
This embodiment provides a multi-discipline collaborative editing and control method for film and stage that simplifies multi-discipline control and allows the sound image of the sound-reinforcement system to be arranged flexibly and quickly. Through the multitrack playback editor module of an integrated console, the method realizes centralized arrangement and control of the materials of several disciplines.
As shown in Fig. 1, the multi-discipline collaborative editing and control method comprises the following steps:
S101: display a timeline on the display interface of the integrated console;
S102: add and/or delete tracks used to control the corresponding performance devices, a track being one or more of an audio track, a video track, a light track and a device track;
S103: edit track attributes;
S104: add materials;
S105: edit material attributes;
S106: the integrated console issues the corresponding control instructions according to the attributes of each track and of its materials.
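The track/material data model behind steps S101–S106 can be sketched as below. This is a minimal sketch under my own assumptions; the class and field names are illustrative and do not come from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str
    start_time: float   # actual playback start on the timeline, in seconds
    duration: float     # playback length in seconds

@dataclass
class Track:
    kind: str                  # "audio" | "video" | "light" | "device"
    locked: bool = False       # when locked, contents may not be modified
    muted: bool = False        # master mute for the track
    materials: list = field(default_factory=list)

    def add(self, m: Material):
        if self.locked:
            raise RuntimeError("track is locked")
        self.materials.append(m)

# A timeline is simply the set of tracks aligned to one time axis.
timeline = [Track("audio"), Track("light")]
timeline[0].add(Material("song1", 0.0, 30.0))
```

The console would then walk the timeline and, per S106, emit control instructions derived from each track's and material's attributes rather than from the media files themselves.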
As shown in Fig. 2 and Fig. 12, the multi-discipline collaborative editing and control method may optionally include multitrack audio playback control (corresponding to the audio control module below), in which case the method comprises the following steps:
S201: add audio tracks; add, in the display interface, one or more audio tracks (regions) 1, 2 parallel to and aligned with the timeline, each audio track corresponding to one output channel.
S202: edit audio track attributes; editable attributes include track lock and track mute. The track mute attribute controls whether the audio materials on the track and on all its sub-tracks are muted, and is the master control of the audio track. When the track lock attribute is set, no attribute of the track other than a few standalone ones such as mute and hide-sub-tracks can be modified, nor can the material positions or material attributes on the track.
S203: add audio materials; add one or more audio materials 111, 112, 113, 211, 212, 213, 214 to audio tracks 1, 2, a representation of each material being generated on its track, with the length of audio track it occupies matching the material's total duration. Before adding, the audio material list is first obtained from the audio server, and materials are then selected from this list and placed on an audio track. Once an audio material has been added, an audio attribute file corresponding to it is generated; the integrated console controls the instructions it sends to the audio server by editing this attribute file, rather than by directly calling or editing the material's source file, which guarantees both the safety of the source file and the stability of the integrated console.
S204: edit audio material attributes; these include start position, end position, start time, end time, total duration and playback length. The start position is the timeline moment corresponding to the (vertical) leading edge of the material, and the end position is the timeline moment of its trailing edge; the start time is the moment on the timeline at which the material actually begins playing, and the end time is the moment at which playback actually ends. In general the start time may be delayed past the start position, and the end time advanced before the end position. The total duration is the material's own length, i.e. the time difference between the start and end positions; the playback length is the time the material actually plays on the timeline, i.e. the difference between start time and end time. Adjusting the start and end times therefore trims the material, so that only the part the user wishes to hear is played.
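The relationship between the four time attributes can be captured in a small helper. This is an illustrative sketch, assuming all values are timeline moments in seconds; the function name is mine.

```python
def material_times(start_pos, end_pos, start_time, end_time):
    """Return (total_duration, playback_length) for a material whose span on
    the timeline is [start_pos, end_pos] and whose actual play window is
    [start_time, end_time].  The play window must lie inside the span:
    the start time may only be delayed, the end time only advanced."""
    if not (start_pos <= start_time <= end_time <= end_pos):
        raise ValueError("play window must lie within the material span")
    total_duration = end_pos - start_pos
    playback_length = end_time - start_time
    return total_duration, playback_length

# A 30 s material placed at t=10, trimmed to play from t=12 to t=35.
total, played = material_times(10.0, 40.0, 12.0, 35.0)
```

Dragging the material horizontally shifts `start_pos` and `end_pos` together (total duration unchanged), while editing the start/end times changes only the playback length.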
Moving an audio material (horizontally) within its audio track changes the start and end positions, but their relative distance on the timeline does not change, i.e. the material's length does not change. Adjusting a material's start and end times changes its actual playback time on the timeline and its playback length. Several audio materials can be placed on one audio track, meaning that within the period represented by the timeline they are played in order (through the corresponding output channel). Note that the position (time location) of the materials on any audio track can be adjusted freely, but the materials must not overlap one another.
Furthermore, because the integrated console only manipulates the attribute files corresponding to audio materials, it can also cut and splice them. Cutting divides one audio material on a track into several materials, each of which receives its own attribute file after the split, while the source file remains intact; the integrated console then issues control commands according to these new attribute files so that the source file is called for the corresponding playback and audio operations. Splicing, similarly, merges two audio materials into one by merging their attribute files into a single attribute file, through which the console instructs the audio server to call the two audio source files.
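Cutting purely at the attribute-file level, as described above, can be sketched as follows. This is a minimal sketch under my own assumptions (attribute records as plain dicts, field names `start_pos`/`end_pos`/`src` are illustrative); the patent does not specify a file format.

```python
def split_material(mat, t):
    """Split one material's attribute record at timeline moment t.
    The source file named by mat['src'] is untouched -- the only result
    is two new attribute records covering the two halves."""
    if not (mat["start_pos"] < t < mat["end_pos"]):
        raise ValueError("split point must fall inside the material")
    first = dict(mat, end_pos=t)
    second = dict(mat, start_pos=t)
    return first, second

halves = split_material({"src": "a.wav", "start_pos": 0.0, "end_pos": 30.0}, 10.0)
```

Splicing would be the inverse: merge two adjacent records referring to their respective source files into one record, again without touching the audio data.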
Furthermore, groups of physical keys, each group corresponding to one audio track, can be provided on the integrated console so that the attributes of audio materials can be adjusted by hand, for example a knob that nudges a material's position (timeline position) within its audio track backward or forward.
S205: add audio sub-tracks 12, 13, 14, 15, 21, 22; add one or more sub-tracks corresponding to one of the audio tracks, each sub-track parallel to the timeline and associated with the output channel of its parent audio track.
Each audio track can have attached sub-tracks of two types: sound-image sub-tracks and sound-effect sub-tracks. A sound-image sub-track applies sound-image trajectory processing to some or all of the audio materials on its parent audio track; a sound-effect sub-track applies effect processing to some or all of them. This step can further comprise the following:
S301: add sound-image sub-tracks and sound-image materials; add one or more sound-image materials 121, 122 to a sound-image sub-track, a representation of each material being generated on the sub-track, with the length of sub-track it occupies matching the material's total duration.
S302: edit sound-image sub-track attributes; as with audio tracks, the editable attributes include track lock and track mute.
S303: edit sound-image material attributes; as with audio materials, these include start position, end position, start time, end time, total duration and playback length.
Through the sound-image materials on a sound-image sub-track, sound-image trajectory processing can be applied, within the period between each material's start and end times, to the signal output by the channel corresponding to the sub-track's parent audio track. Adding different types of sound-image material to the sub-track therefore applies different types of trajectory processing to that channel's output, and adjusting each material's start position, end position, start time and end time adjusts when the trajectory processing begins and how long the trajectory effect lasts.
Sound-image materials differ from audio materials in that an audio material represents audio data, whereas sound-image trajectory data is the output-level data, changing over time, of each virtual speaker node in the speaker distribution map such that, within a period of set length, the sound image formed by those node levels runs along a preset path or remains stationary. That is, the trajectory data contains the output-level changes of all speaker nodes in the distribution map over the set period. The types of trajectory data are fixed-point, variable-path and variable-region sound-image trajectory data; the type of trajectory data determines the type of the sound-image material, and the total duration of the sound-image movement described by the data determines the time difference between the material's start and end positions, i.e. the material's total duration. Sound-image trajectory processing means adjusting, according to the trajectory data, the actual output level of each physical speaker corresponding to each speaker node, so that the sound image of the physical loudspeaker system runs along the set path, or remains stationary, within the period of set length.
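One simple way to generate such per-node level data is a crossfade panned along a chain of nodes. This is a sketch of the general idea only, not the patent's trajectory generator: the linear path and the equal-power crossfade law are my assumptions.

```python
import math

def pan_trajectory(nodes, steps):
    """Produce trajectory data as one gain-per-node dict per time step,
    moving the sound image linearly along the node chain with an
    equal-power crossfade between each pair of adjacent nodes."""
    frames = []
    for s in range(steps + 1):
        pos = s / steps * (len(nodes) - 1)          # fractional node index
        i = min(int(pos), len(nodes) - 2)           # left neighbour
        f = pos - i                                  # 0..1 between i and i+1
        gains = {n: 0.0 for n in nodes}
        gains[nodes[i]] = math.cos(f * math.pi / 2)  # fades out
        gains[nodes[i + 1]] = math.sin(f * math.pi / 2)  # fades in
        frames.append(gains)
    return frames

# Move the image from node A across B to C in 8 steps.
frames = pan_trajectory(["A", "B", "C"], 8)
```

Fixed-point trajectory data would be the degenerate case where every frame is identical; variable-path and variable-region data would vary the node set or path geometry between frames.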
S304: add sound-effect sub-tracks; their types include volume-and-gain sub-tracks 13, 22 and EQ sub-tracks 14, 15, 21. Each audio track may have one volume-and-gain sub-track and one or more EQ sub-tracks. The volume-and-gain sub-track adjusts the signal level of the output channel corresponding to the parent audio track; an EQ sub-track applies EQ effect processing to the signal output by that channel.
S305: edit sound-effect sub-track attributes; besides track lock, track mute and track identity, these include effect-processing parameters matching the sub-track type. For example, a volume-and-gain sub-track carries output-level adjustment parameters, and an EQ sub-track carries EQ processing parameters. Modifying a sub-track's effect parameters adjusts the sound of the output channel of its parent audio track.
S206: save the data, or generate, from the attributes of the audio tracks and their sub-tracks and from the attributes of the audio and sound-image materials, the control instructions addressing each audio material's source file, and control the playback of the source files, and their sound-image and effect processing, according to those instructions.
The control instructions decide whether a material's audio source file is to be called (played), the start and end times of source-file playback (measured on the timeline), and the sound-image and effect processing of the source file; the specific instructions correspond to the attributes of each audio track and its attached sub-tracks, of the audio materials and of the sound-image materials. That is, an audio track never directly calls or processes the source file of an audio material; it only handles the attribute file corresponding to that source file, and indirect control of the source file is achieved by editing the attribute file, adding or editing sound-image materials, and adjusting the attributes of the audio track and its sub-tracks.
For example, an audio material added to an audio track enters the playlist and is played when the track starts playing. Editing the track's mute attribute controls whether the track and its attached sub-tracks are muted (active); editing the track's lock attribute makes every attribute other than mute, hide-sub-tracks and a few other standalone ones, and every material position and material attribute on the track, unmodifiable (the locked state). See the description above for details.
As shown in Fig. 4 and Fig. 12, the multi-discipline collaborative editing and control method of this embodiment may also optionally add video playback control (corresponding to the video control module below), comprising the following steps:
S401: add a video track; add (in the display interface) a video track 4 (region) parallel to and aligned with the timeline. The video track corresponds to one controlled device, in the present invention a video server.
S402: edit video track attributes; editable attributes include track lock and track mute, and are analogous to audio track attributes.
S403: add video materials; add one or more video materials 41, 42, 43, 44 to the video track, a representation of each material being generated on the track, with the length of video track it occupies matching the material's total duration. Before adding, the video material list is first obtained from the video server, and materials are then selected from that list and placed on the video track. Once a video material has been added, a video attribute file corresponding to it is generated; the integrated console controls the instructions it sends to the video server by editing this attribute file, rather than by directly calling or editing the material's source file, which guarantees both the safety of the source file and the stability of the integrated console.
S404: edit video material attributes; these include start position, end position, start time, end time, total duration and playback length, and are analogous to audio material attributes. Video materials can likewise be moved horizontally, cut and spliced, and a group of physical keys corresponding to the video track can be added to the integrated console so that video material attributes can be adjusted by hand.
S405: save the data, or generate, from the video track attributes and video material attributes, the control instructions addressing each video material's source file, and control the playback of the video source files according to those instructions. As with audio tracks, the specific instructions correspond to the attributes of the video track and of its materials.
As shown in Fig. 5 and Fig. 12, the multi-discipline collaborative editing and control method of this embodiment may also optionally add lighting control (corresponding to the lighting control module below), comprising the following steps:
S501: add a light track; add (in the display interface) a light track 3 (region) parallel to and aligned with the timeline. The light track corresponds to one controlled device, in the present invention a lighting-network signal adapter (for example an Art-Net network card).
S502: edit light track attributes; editable attributes include track lock and track mute, and are analogous to audio track attributes.
S503: add light materials; add one or more light materials 31, 32, 33 to the light track, a representation of each material being generated on the track, with the length of light track it occupies matching the material's total duration. As with audio and video materials, the light track does not load the light material itself; an attribute file corresponding to the material's source file is generated, and control instructions issued through the attribute file control the output of the source file.
A light material is lighting-network control data of a certain duration, for example Art-Net data encapsulating DMX data. Light materials can be produced as follows: after a light program has been arranged on a conventional lighting console, the integrated console connects its lighting-network interface to the lighting-network interface of the lighting console and records the light control signals the lighting console outputs; during recording, the integrated console must stamp timecode onto the recorded control signals so that they can subsequently be edited on the light track.
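The timecode-stamping recorder described above amounts to pairing each captured lighting frame with its offset from the start of the recording. The sketch below shows only that idea; the injectable `clock` parameter and the function name are my own, and real Art-Net capture would read frames from a UDP socket.

```python
import time

def record_frames(frame_source, clock=time.monotonic):
    """Record an iterable of lighting-network frames (e.g. Art-Net packets
    carrying DMX data), stamping each with a timecode relative to the start
    of the recording so it can be edited on the light track."""
    t0 = clock()
    return [(clock() - t0, frame) for frame in frame_source]
```

With a real-time clock the stamps reflect arrival time; in the usage below a fake clock makes the behaviour deterministic.

```python
ticks = iter([0.0, 0.04, 0.08])           # pretend frames 40 ms apart
recording = record_frames([b"dmx1", b"dmx2"], clock=lambda: next(ticks))
```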
S504: edit light material attributes; these include start position, end position, start time, end time, total duration and playback length, and are analogous to audio material attributes. Light materials can likewise be moved horizontally, cut and spliced, and a group of physical keys corresponding to the light track can be added to the integrated console so that light material attributes can be adjusted by hand.
S505: save the data, or generate, from the light track attributes and light material attributes, the control instructions addressing each light material's source file, and control the output of the light source files according to those instructions. As before, the specific instructions correspond to the attributes of the light track and of its materials.
As shown in Fig. 6 and Fig. 12, the multi-discipline collaborative editing and control method of this embodiment may also optionally add device control (corresponding to the device control module below), comprising the following steps:
S601: add device tracks; add (in the display interface) one or more device tracks 5 (regions) parallel to the timeline, each device track corresponding to one controlled device, for example a mechanical device. Before a device track is added, it must be confirmed that the controlled device has established a connection with the integrated console. The integrated console and the controlled devices establish their connections over TCP: for example, the integrated console is configured as the TCP server and each controlled device as a TCP client, and a device's TCP client actively connects to the console's TCP server after joining the network.
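The console-as-server, device-as-client arrangement can be sketched with plain sockets. This is a minimal localhost illustration under my own assumptions (function names, the `READY` handshake); a real deployment would add framing, authentication and reconnection.

```python
import socket

def start_console_server(host="127.0.0.1", port=0):
    """Integrated console side: listen for controlled devices.
    port=0 lets the OS pick a free port."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen()
    return srv

def device_connect(port, host="127.0.0.1"):
    """Controlled-device side: actively connect to the console after
    joining the network."""
    cli = socket.socket()
    cli.connect((host, port))
    return cli
```

Once a device's connection is accepted, the console can push the control instructions generated from the device track down that socket.

```python
srv = start_console_server()
cli = device_connect(srv.getsockname()[1])
conn, _ = srv.accept()   # console now has a channel to this device
```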
S602: edit device track attributes; editable attributes include track lock and track mute, and are analogous to audio track attributes. If a device track is muted, none of its attached control sub-tracks executes any operation.
S603: add control sub-tracks; add one or more control sub-tracks corresponding to one of the device tracks, each control sub-track parallel to the timeline and associated with the controlled device of its device track.
S604: add control materials; add a control material of the type matching the type of the control sub-track, a representation of the material being generated on that sub-track, with the length of sub-track it occupies matching the material's total duration.
Control sub-track types include TTL, relay and network control sub-tracks. Correspondingly, the materials that may be added to a TTL control sub-track include TTL materials 511, 512, 513 (e.g. TTL high-level and TTL low-level control materials); the materials that may be added to a relay sub-track include relay materials 521, 522, 523, 524 (e.g. relay-open and relay-close control materials); and the materials that may be added to a network control sub-track include network materials 501, 502, 503 (e.g. TCP/IP, UDP, RS-232 and RS-485 protocol communication control materials). Adding a control material enables the corresponding control instruction to be issued; a control material essentially is a control instruction.
S605: Edit control sub-material attributes, which include start position, end position, and total duration. Moving a control sub-material horizontally along its control sub-track changes its start and end positions, but the relative distance between the start and end positions on the time axis does not change, i.e., the length of the control material does not change. The start position of a control material is the time-axis moment at which the corresponding control instruction begins to be sent to the controlled device; the end position is the time-axis moment at which sending of the control instruction stops.
Further, association relationships can be set between control materials on the same control sub-track: if the control command of the material whose start position is earlier on the time axis has not executed successfully, then the integrated control console will not issue, or the controlled device will not execute, the control instruction of the associated material whose start position is later. This applies, for example, to the opening/closing and lifting control of a stage curtain.
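The association rule above can be sketched as follows. The data model and names are illustrative only, not the patent's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlMaterial:
    """A control material on a sub-track; essentially one control instruction.
    Fields are illustrative assumptions, not the patent's data model."""
    name: str
    start: float            # start position on the time axis (seconds)
    end: float              # end position on the time axis (seconds)
    depends_on: Optional["ControlMaterial"] = None  # associated earlier material
    executed_ok: bool = False

def may_issue(material: ControlMaterial) -> bool:
    """Issue a material's instruction only if its associated earlier
    material (if any) has already executed successfully."""
    dep = material.depends_on
    return dep is None or dep.executed_ok
```

For a curtain, `raise` could depend on `open`: until the open command reports success, `may_issue` keeps the raise command from being sent.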
Further, a guard time of a set length can be configured before and after each control material on a control sub-track: within the guard time, no other control material can be added to the sub-track, or no control command may be issued.
S606: Save the data, or generate control instructions according to the device track and the attributes of its control sub-tracks and control materials, and send the control instructions to the corresponding controlled devices.
In addition, this embodiment also provides a multi-disciplinary editing and control system (a performance integrated control system). As shown in Fig. 7, the system includes an integrated control console 70 and optionally includes an audio server 76, a video server 77, a lighting control module 78, and a device control module 79. The integrated control console 70 includes a multi-track editing and playback module 71, which can perform one or more of the audio control, video control, lighting control, and device control of the above integrated performance control method; the concrete implementation steps are not repeated here. The multi-track playback and editing control module includes an audio control module 72 and optionally includes a video control module 73, a lighting control module 74, and a device control module 75.
As shown in Fig. 8, the audio control module 72 includes an audio track adding module 81, an audio track attribute editing module 82, an audio material adding module 83, an audio material attribute editing module 84, an audio sub-track adding module 85, and a save data / output audio control instruction module 86. The functions realized by these modules correspond one-to-one with the aforementioned steps S201 to S206 and are not repeated here; the same applies below.
Further, the audio playback control principle of the multi-disciplinary editing and control system is shown in Fig. 13. The integrated control console further includes a quick playback editing module, a physical input module, and the multi-track playback editing module. The quick playback editing module is used to edit audio materials in real time and to issue control instructions that cause the audio server 76 to play the source files corresponding to the audio materials. The physical input module corresponds to the physical operation keys on the integrated control console 71 and performs real-time tuning control of sound sources fed into the console from outside.
Correspondingly, the audio server is provided with a mixing matrix module, a track matrix module, a 3x1 output mixing module, and a physical output module. The mixing matrix module can receive the audio signals output by the audio source files in the audio server that are called, in the form of control commands, by the quick playback editing module and the multi-track playback editing module, as well as the audio signals output by the physical input module; the track matrix module can likewise receive each of these audio inputs. The mixing matrix performs mixing processing on each audio input and then outputs to the output mixing module; the track matrix module performs acoustic-image trajectory processing on each audio input and then outputs to the output mixing module. The output mixing module receives the audio outputs from the mixing matrix module, the track matrix module, and the physical input module and, after 3x1 mixing, outputs them through the physical output interfaces of the physical output module. Here, acoustic-image trajectory processing means adjusting the output level of each speaker entity according to acoustic-image trajectory data, so that the acoustic image of the physical speaker system runs along a set path, or remains stationary, within a period of set length.
In this embodiment, the source files of the audio materials are stored on the audio server outside the integrated control console. The multi-track playback editing module never directly calls or processes the source file of an audio material; it only handles the attribute file corresponding to the audio source file. By editing and adjusting the attribute file of the source file, adding/editing acoustic-image materials, and editing the attributes of the audio track and its sub-tracks, indirect control of the audio source file is achieved. The output channel corresponding to each audio track therefore outputs only control signals/instructions for the audio source file; the audio server that receives those control instructions then performs the various processing of the audio source file.
As shown in Fig. 14, the multi-track playback editing module receives a list of valid audio materials from the audio server 76 and does not process the audio source files directly. The source files are stored in the audio server, which, upon receiving the corresponding control commands, calls them up and performs the various audio effect processing, such as entering the mixing matrix module for mixing or the track matrix module for trajectory processing. An acoustic-image material is in fact also a control command; it can be stored on the integrated control console 71 or uploaded to the audio server.
As shown in Fig. 9, the video control module 73 includes a video track adding module 91, a video track attribute editing module 92, a video material adding module 93, a video material attribute editing module 94, and a save data / output video control instruction module 95. The functions realized by these modules correspond one-to-one with the aforementioned steps S401 to S405.
Further, the video editing and playback control principle of the multi-disciplinary editing and control system is shown in Fig. 15. The integrated control console does not execute the source files of the video materials directly; instead, it obtains the video material list and the corresponding attribute files and issues control instructions to the video server, which then plays the video source files and applies effects according to the control instructions.
As shown in Fig. 10, the lighting control module 74 includes a lighting track adding module 110, a lighting track attribute editing module 120, a lighting material adding module 130, a lighting material attribute editing module 140, and a save data / output lighting control instruction module 150. The functions realized by these modules correspond one-to-one with the aforementioned steps S501 to S505.
Further, the lighting control principle of the multi-disciplinary editing and control system is shown in Fig. 16. The integrated control console is additionally provided with a lighting signal recording module for recording the lighting control signals output by the lighting console and stamping the recorded signals with a timecode during recording, so that editing control can be performed on the lighting track.
As shown in Fig. 11, the device control module 75 includes a device track adding module 151, a device track attribute editing module 152, a control sub-track adding module 153, a control material adding module 154, a control material attribute editing module 155, and a save data / output device control instruction module 156. The functions realized by these modules correspond one-to-one with the aforementioned steps S601 to S606.
Further, the device control principle of the multi-disciplinary editing and control system is shown in Fig. 17. The various device control signals output by the integrated control console are output through the protocol interfaces on a device adapter to the corresponding controlled devices.
In addition, the integrated control console may also include an acoustic-image trajectory data generation module for producing (generating) acoustic-image trajectory data (i.e., acoustic-image materials). The acoustic-image trajectory data obtained through this module is called by the multi-track playback editing module to control the acoustic-image trajectory in the track matrix module of the audio server. Further, this embodiment provides an acoustic-image trajectory (sound-effect trajectory) control method, in which a control host (such as the integrated control console or the audio server) configures the output level value of each speaker node of the physical speaker system, so that the acoustic image moves in a set manner, or remains stationary, within a set total duration. As shown in Fig. 18, the control method includes:
S181: generate acoustic-image trajectory data;
S182: within the total duration corresponding to the acoustic-image trajectory data, adjust the output level of each speaker entity according to the trajectory data;
S183: within the total duration, superimpose the input level of the signal fed to each speaker entity with that entity's output level to obtain the level actually output by each speaker entity.
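Step S183 can be sketched as follows, under the assumption that the superposition of two dB levels is modeled as simple addition (the patent does not fix the superposition formula):

```python
NEG_INF = float("-inf")  # level of nodes excluded from the acoustic image

def actual_output_levels(input_level_db, node_output_levels_db):
    """S183 sketch: superimpose the input level of the signal fed to the
    speaker entities with each entity's trajectory output level.  Both are
    in dB, so superposition is modeled here as addition (an assumption).
    A node held at -inf stays silent regardless of the input level."""
    return [input_level_db + out for out in node_output_levels_db]
```

Note that a node whose trajectory level is negative infinity remains silent no matter how hot the input signal is, which is why the document prefers negative infinity over zero for excluded nodes.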
Acoustic-image trajectory data refers to the output level data of each speaker node changing over time within a period of set length (i.e., the total duration of the acoustic image), such that the acoustic image formed by the output levels of the virtual speaker nodes in the virtual speaker distribution map on the integrated control console runs along a preset path or remains stationary. That is, the acoustic-image trajectory data contains the output level change data of all speaker nodes in the speaker distribution map within the period of set length. For any speaker node, the output level varies over time within the set period; it may also be zero, negative, or even negative infinity, with negative infinity preferred.
Each speaker node corresponds to one speaker entity in the physical speaker system, and each speaker entity comprises one or more speakers located at the same position. That is, each speaker node can correspond to one or more co-located speakers. For the physical speaker system to accurately reproduce the acoustic-image path, the position distribution of the virtual speaker nodes in the speaker distribution map should correspond to the position distribution of the speaker entities in the physical system; in particular, the relative positional relationships between the speaker nodes should correspond to the relative positional relationships between the speaker entities.
The level actually output by a speaker entity is the superposition of the level of the input signal and the output level, in the above acoustic-image trajectory data, of the speaker node corresponding to that entity. The former is a characteristic of the input signal; the latter can be regarded as a characteristic of the speaker entity itself. At any moment, different input signals have different input levels, whereas a given speaker entity has only one output level. Acoustic-image trajectory processing can therefore be understood as processing of the output level of each speaker entity so as to form the preset acoustic-image path effect (including a stationary acoustic image).
The superposition of a speaker entity's input level and output level may be performed before the audio signal actually reaches the speaker entity, or after it enters the entity; this depends on how the links of the overall sound-reinforcement system are constituted and on whether the speaker entity has a built-in audio signal processing module, such as a DSP unit.
The types of acoustic-image trajectory data include fixed-point acoustic-image data, variable-track acoustic-image trajectory data, and variable-region acoustic-image trajectory data. When simulating the generation of acoustic-image trajectory data on the integrated control console, in order to conveniently control the speed and course of the acoustic image, this embodiment represents the running path of the acoustic image by line segments connecting, in sequence, several discretely distributed acoustic-image trajectory control points in the speaker distribution map; these discretely distributed control points determine both the running path and the overall running time of the acoustic image.
A fixed-point acoustic image refers to the situation in which, within a period of set length, one or more selected speaker nodes in the speaker distribution map continuously output level, while the output level of the unselected speaker nodes is zero or negative infinity. Correspondingly, fixed-point acoustic-image data is the output level data of each speaker node changing over time under those conditions. For a selected speaker node, the output level is continuous over the set time (it may also fluctuate up and down); for an unselected speaker node, the output level remains at negative infinity over the set time.
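A minimal sketch of fixed-point acoustic-image data at a single instant, under the simplifying assumption that each selected node gets one constant level value (real data varies per node and over time):

```python
NEG_INF = float("-inf")

def fixed_point_levels(all_nodes, selected, level_db=0.0):
    """Fixed-point acoustic image at one instant: selected nodes output
    `level_db`, every unselected node is held at negative infinity
    (the preferred value for excluded nodes in this document)."""
    return {n: (level_db if n in selected else NEG_INF) for n in all_nodes}
```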
A variable-track acoustic image refers to the situation in which, within a period of set length, each speaker node outputs level according to certain rules so that the acoustic image runs along a preset path. Correspondingly, variable-track acoustic-image trajectory data is the output level data of each speaker node changing over time that makes the acoustic image run along the preset path within the set period. The running path of the acoustic image need not be exactly accurate, and the movement (running) duration of the acoustic image will not be very long; it is only necessary to construct, roughly, a running effect that the audience can recognize.
A variable-region acoustic image refers to the situation in which, within a period of set length, the output level of each speaker node changes according to certain rules so that the acoustic image runs through a preset region. Correspondingly, variable-region acoustic-image trajectory data is the output level data of each speaker node changing over time that makes the acoustic image run through the preset region within the set time.
As shown in Fig. 19, the variable-region acoustic-image trajectory data of this embodiment can be obtained by the following method:
S1901: Set speaker nodes: add or delete speaker nodes in the speaker distribution map 10.
S1902: Modify speaker node attributes. The attributes of a speaker node include speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name, etc. A speaker node is represented in the speaker distribution map by a speaker icon; moving the icon changes its coordinate position. Speaker type refers to full-range or subwoofer speakers; the concrete types can be divided according to actual conditions. Every speaker node in the speaker distribution map is assigned an output channel, and each output channel corresponds to one speaker entity in the physical speaker system; each speaker entity comprises one or more co-located speakers. That is, each speaker node can correspond to one or more co-located speakers. To reproduce the acoustic-image running path designed in the speaker distribution map, the position distribution of the speaker entities should correspond to the position distribution of the speaker nodes in the map.
S1903: Divide the acoustic-image regions and set the acoustic-image running paths:
Select a point in the speaker distribution map as the center S0 and add a speaker node at S0; then divide several concentric circular regions with S0 as the circle center, the circle of largest diameter wholly or partly covering the speaker nodes on the speaker distribution map. The region enclosed by the circle of smallest diameter is set as acoustic-image region Z1, and the regions between adjacent circles, from the inside outward, are each set as acoustic-image regions Z2, Z3, Z4 ... ZN (N being a natural number greater than or equal to 2): the region between the smallest circle and the second-smallest circle is region Z2, the region between the second-smallest and third-smallest circles is region Z3, and so on (see Fig. 20).
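Classifying a point into its acoustic-image region reduces to comparing its distance from S0 with the sorted circle radii. A hedged sketch (function name and 1-based indexing are illustrative):

```python
import math

def region_index(point, center, radii):
    """Return which acoustic-image region Z1..ZN `point` falls in, given
    the concentric-circle radii around `center` (1-based index; None if
    the point lies outside the circle of largest diameter)."""
    r = math.hypot(point[0] - center[0], point[1] - center[1])
    for i, radius in enumerate(sorted(radii), start=1):
        if r <= radius:
            return i
    return None
```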
Multiple running paths radiating outward from the center S0 as their starting point are set, each running path being a straight-line segment traversing every acoustic-image region. These running paths may wholly or partly cover the speaker nodes in the above concentric circular regions, and preferably cover all speaker nodes in those regions.
The end point of each running path is a speaker node within or beyond the above concentric circular regions; the starting point is the center S0, and the end point depends on the distribution of speaker nodes along the path direction: (1) if there is a speaker node outside the circle of largest diameter in the path direction, the end point of the path is the speaker node in that direction which lies outside the largest circle and is closest to it; (2) if there is no speaker node outside the largest circle in the path direction, the end point is the speaker node farthest from the center S0 in that direction.
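The two end-point rules can be sketched as follows (illustrative geometry only; determining which speaker nodes lie on a path's direction is assumed already done):

```python
import math

def path_end_point(center, direction_nodes, r_max):
    """Choose a running path's end point per the two rules above.
    `center` is S0, `direction_nodes` are coordinates of the speaker nodes
    lying on this path's direction, `r_max` is the radius of the circle of
    largest diameter."""
    dist = lambda p: math.hypot(p[0] - center[0], p[1] - center[1])
    outside = [p for p in direction_nodes if dist(p) > r_max]
    if outside:
        # Rule (1): nearest node beyond the largest concentric circle
        return min(outside, key=dist)
    # Rule (2): no node beyond the largest circle -> farthest node from S0
    return max(direction_nodes, key=dist)
```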
Each running path passes through at least one speaker node, and may pass through two or more.
Referring to Fig. 20, the speaker distribution map has four concentric circles centered on S0, which divide it into four acoustic-image regions Z1, Z2, Z3, Z4. Thirteen acoustic-image running paths are set, namely S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12, S13; every speaker node within the concentric circular regions (the region enclosed by the largest circle) has a running path passing through it. Running paths S2, S3, S5, S8, S9, S10, S11, S12, and S13 each pass through exactly one speaker node, while running paths S1, S4, and S7 each pass through two speaker nodes.
Referring to Fig. 21, further, when selecting the center point, a speaker node may be chosen directly as the center. The speaker distribution map of Fig. 21 has five concentric circles centered on speaker node S0', divided into five acoustic-image regions Z1', Z2', Z3', Z4', Z5' and several acoustic-image running paths. Running paths S1', S2', and S3' each have three speaker nodes; at the current moment, the acoustic-image trajectory points P41, P42, P43 on paths S1', S2', S3' are all located in acoustic-image region Z4'.
S1904: Edit the acoustic-image region time attributes, including the moment corresponding to each acoustic-image region, the time required from the current region to the next region, and the total running duration of the acoustic image. Editing the region attributes is similar to editing variable-track acoustic-image trajectory point attributes. If the moment corresponding to a region is modified, the moments corresponding to the other acoustic-image regions and the total running duration of the acoustic image must be adjusted accordingly. If the time required from a region to the next region is adjusted, the moment corresponding to the next region and the total running duration must be adjusted. If the total running duration is modified, the moment corresponding to each region on the running path and the time required to reach the next region must all be adjusted.
S1905: Record the variable-region acoustic-image trajectory data: record the output level value, at each moment, of each speaker node as the acoustic image successively runs through each acoustic-image region along the set running paths. The output level of the relevant speaker nodes at a given moment can be calculated by the following method.
Assume the total running duration of the acoustic image is set to T. The acoustic-image trajectory point on each running path moves, within T, from the path's starting point to its end point or from its end point to its starting point; the acoustic-image movement speeds on the different running paths may be identical or different.
At a certain moment t, the current acoustic-image trajectory point Pj of some running path P moves from an acoustic-image region Zi toward the next region Zi+1. On the route of path P, the adjacent speaker nodes inside and outside trajectory point Pj, namely speaker node k and speaker node k+1, have output levels dBm and dBm+1 respectively; the output level of every other speaker node on path P is zero or negative infinity. Speaker nodes k and k+1 are both located on the running path; node k is the speaker node on the inside of trajectory point Pj (the side toward the center S0), and node k+1 is the speaker node on the outside of the acoustic-image trajectory (the side away from the center S0).
Then, at the current moment t, while the acoustic-image trajectory point Pj of running path P runs from the current region Zi toward the next region Zi+1:

the output level of speaker node k is dBm = 10·loge(η) ÷ 2.3025851 (equivalent to 10·log10(η));

the output level of speaker node k+1 is dBm+1 = 10·loge(β) ÷ 2.3025851 (equivalent to 10·log10(β));

where l12 is the distance from speaker node k to speaker node k+1, l1P is the distance from node k to the current trajectory point Pj, and lP2 is the distance from Pj to node k+1. As the formulas show, each trajectory point on a running path has two speaker nodes outputting level, except when the trajectory point coincides exactly with a speaker node, in which case only one node outputs level.
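The two level formulas can be sketched numerically. Note that 10·loge(x) ÷ 2.3025851 equals 10·log10(x). The document never spells out η and β in terms of the three distances; the sketch below assumes η = lP2/l12 and β = l1P/l12, which yields 0 dB when the trajectory point sits on a node and tends to negative infinity as it reaches the opposite node:

```python
import math

def pan_levels(l12, l1p, lp2):
    """Output levels (dB) of the inner node k and outer node k+1 for a
    trajectory point Pj between them.  eta and beta are ASSUMED distance
    ratios (the document leaves them undefined):
        eta  = lp2 / l12   -> level of inner node k
        beta = l1p / l12   -> level of outer node k+1
    10 * ln(x) / 2.3025851 == 10 * log10(x)."""
    to_db = lambda x: float("-inf") if x == 0 else 10 * math.log(x) / 2.3025851
    return to_db(lp2 / l12), to_db(l1p / l12)
```

With Pj midway between the nodes, both levels come out near -3 dB, a common equal-power crossfade point; with Pj exactly on node k, node k outputs 0 dB and node k+1 is silent.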
Referring to Fig. 20, the speaker distribution map is provided with four concentric circles centered on S0, and a speaker node is set at S0. The circles divide the map into four acoustic-image regions Z1, Z2, Z3, Z4, and thirteen acoustic-image running paths S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12, S13 are set; running paths S1, S4, S7, and S10 each have three speaker nodes, while each of the remaining running paths has only two.
At the current moment, the current trajectory points P31, P32, P33, P34 of running paths S1, S2, S3, S4 have run into acoustic-image region Z4. If the acoustic-image movement speed on every running path is identical, these current trajectory points lie on a circle centered on S0. At this moment, on each running path only the speaker nodes immediately inside and outside the current trajectory point have output level; the output level of every other speaker node is zero or negative infinity. Taking current trajectory point P31 as an example, the adjacent inner node is the speaker node at the upper right within region Z2 in Fig. 20 (speaker nodes are shown as small circles), and the adjacent outer node is the speaker node at the upper right outside region Z4, which is also the end point of running path S1. At this moment only these two speaker nodes on path S1 have level output; the output level of the node at the center S0 is zero or negative infinity.
When recording the output level values of each speaker node at each moment within the total duration T for a variable-region acoustic-image trajectory, the recording may be continuous or at a certain frequency. The latter means recording the output level of each speaker node once per fixed time interval. In this embodiment, the output level of each node as the acoustic image runs along the set paths is recorded at a frequency of 25 or 30 frames per second. Recording the output level data at a fixed frequency reduces the data volume and speeds up the acoustic-image trajectory processing of the input audio signal, guaranteeing the real-time performance of the acoustic-image running effect.
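Recording at a fixed frame rate rather than continuously can be sketched as sampling each node's level curve at 25 or 30 Hz (the function names here are illustrative, not the patent's):

```python
def sample_trajectory(level_at, total_duration, fps=25):
    """Sample a node's time-varying output level at `fps` frames/second.
    `level_at(t)` returns the output level (dB) at time t; the result is
    the recorded trajectory data for that node over `total_duration` s."""
    n_frames = int(total_duration * fps)
    return [level_at(i / fps) for i in range(n_frames + 1)]
```

A 10 s trajectory at 25 fps thus yields 251 values per node instead of a continuous curve, which is what keeps the trajectory processing real-time.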

Claims (2)

1. A panoramic multi-channel sound-effect trajectory control method, in which a console configures the output level value of each speaker node of a physical speaker system so that an acoustic image moves in a set manner, or remains stationary, within a set total duration, characterized in that the control method comprises:
generating acoustic-image trajectory data, the acoustic-image trajectory data comprising variable-region acoustic-image trajectory data;
within the total duration corresponding to the acoustic-image trajectory data, adjusting the output level of each speaker entity according to the trajectory data;
within the total duration, superimposing the input level of the signal fed to each speaker entity with the output level of the corresponding speaker entity to obtain the level actually output by each speaker entity; the variable-region acoustic-image trajectory data being obtained by the following method:
setting speaker nodes: adding or deleting speaker nodes in a speaker distribution map;
modifying speaker node attributes: the attributes of a speaker node including speaker coordinates, speaker type, corresponding output channel, and initialization level;
dividing acoustic-image regions and setting acoustic-image running paths;
editing acoustic-image region time attributes, including the moment corresponding to each acoustic-image region, the time required from the current region to the next region, and the total running duration of the acoustic image;
recording the variable-region acoustic-image trajectory data: recording the output level value, at each moment, of each speaker node as the acoustic image successively runs through each acoustic-image region along the set running paths; wherein, when dividing the acoustic-image regions and setting the acoustic-image running paths:
a point in the speaker distribution map is selected as a center S0, and a speaker node is added at the center S0; several concentric circular regions are then divided with the center S0 as the circle center, the circle of largest diameter wholly or partly covering the speaker nodes in the speaker distribution map; the region enclosed by the circle of smallest diameter is set as acoustic-image region Z1, and the regions between adjacent circles, from the inside outward, are each set as acoustic-image regions Z2, Z3, Z4 ... ZN, N being a natural number greater than or equal to 2;
multiple running paths radiating outward from the center S0 as starting point are set, each running path being a straight-line segment traversing every acoustic-image region, these running paths wholly or partly covering the speaker nodes in the concentric circular regions;
during the recording of the variable-region acoustic-image trajectory data, the acoustic-image trajectory point on each running path moves, within the total acoustic-image running duration T, from the starting point of the path to its end point or from its end point to its starting point, the acoustic-image movement speeds on the different running paths being identical or different;
assuming that, at a certain moment t, the current acoustic-image trajectory point Pj of a running path P moves from an acoustic-image region Zi toward the next region Zi+1, the adjacent speaker nodes K and K+1 on the route of path P, inside and outside the trajectory point Pj, have output levels dBm and dBm+1 respectively, and the output level of every other speaker node on path P is zero or negative infinity; speaker nodes K and K+1 are both located on the running path, node K being the speaker node on the inside of trajectory point Pj, and node K+1 being the speaker node on the outside of the acoustic-image trajectory;
then, at the current moment t, while the acoustic-image trajectory point Pj of path P runs from the current region Zi toward the next region Zi+1,
the output level of speaker node K is dBm = 10·loge(η) ÷ 2.3025851;

the output level of speaker node K+1 is dBm+1 = 10·loge(β) ÷ 2.3025851;
wherein l12 is the distance from speaker node K to speaker node K+1, l1P is the distance from speaker node K to the current trajectory point Pj, and lP2 is the distance from the current trajectory point Pj to speaker node K+1.
2. The panoramic multi-channel sound-effect trajectory control method according to claim 1, characterized in that, when dividing the acoustic-image regions and setting the acoustic-image running paths:
the end point of each running path is a speaker node within or beyond the concentric circular regions, and the starting point is the center S0: (1) if there is a speaker node outside the circle of largest diameter in the direction of a running path, the end point of that running path is the speaker node in that direction which lies outside the largest circle and is closest to it; (2) if there is no speaker node outside the largest circle in the direction of the running path, the end point is the speaker node farthest from the center S0 in that direction.
CN201511028156.1A 2015-12-31 2015-12-31 Panorama multichannel sound effect method for controlling trajectory Active CN106937204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511028156.1A CN106937204B (en) 2015-12-31 2015-12-31 Panorama multichannel sound effect method for controlling trajectory

Publications (2)

Publication Number Publication Date
CN106937204A CN106937204A (en) 2017-07-07
CN106937204B true CN106937204B (en) 2019-07-02

Family

ID=59442001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511028156.1A Active CN106937204B (en) 2015-12-31 2015-12-31 Panorama multichannel sound effect method for controlling trajectory

Country Status (1)

Country Link
CN (1) CN106937204B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140017684A (en) * 2011-07-01 2014-02-11 Dolby Laboratories Licensing Corporation System and tools for enhanced 3d audio authoring and rendering
CN104754445A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Panoramic multichannel acoustic image trajectory control method
CN104967952A (en) * 2015-06-30 2015-10-07 大连理工大学 Personalized method based on HRTF structural model and subjective feedback



Similar Documents

Publication Publication Date Title
CN104754178B (en) audio control method
CN106937022B (en) multi-professional collaborative editing and control method for audio, video, light and machinery
US10541003B2 (en) Performance content synchronization based on audio
CN108021714A (en) A kind of integrated contribution editing system and contribution edit methods
CN104754186B (en) Apparatus control method
CN110225224B (en) Virtual image guiding and broadcasting method, device and system
CN106937021B (en) performance integrated control method based on time axis multi-track playback technology
CN104750059B (en) Lamp light control method
CN104750058B (en) Panorama multi-channel audio control method
CN106937023B (en) multi-professional collaborative editing and control method for film, television and stage
CN104750051B (en) Based on the panorama multi-channel audio control method for becoming the control of rail acoustic image
CN104754244B (en) Panorama multi-channel audio control method based on variable domain audio-visual effects
CN104754242B (en) Based on the panorama multi-channel audio control method for becoming the processing of rail acoustic image
CN104754243B (en) Panorama multi-channel audio control method based on the control of variable domain acoustic image
Hamasaki et al. 5.1 and 22.2 multichannel sound productions using an integrated surround sound panning system
CN106937204B (en) Panorama multichannel sound effect method for controlling trajectory
CN108073717A (en) A kind of contribution editing machine based on control editor
CN106937205B (en) Complicated sound effect method for controlling trajectory towards video display, stage
US11368806B2 (en) Information processing apparatus and method, and program
CN104751869B (en) Based on the panorama multi-channel audio control method for becoming the control of rail acoustic image
CN104750055B (en) Based on the panorama multi-channel audio control method for becoming rail audio-visual effects
CN106851331A (en) Easily broadcast processing method and system
CN104754241B (en) Panorama multi-channel audio control method based on variable domain acoustic image
CN109348390A (en) A kind of immersion panorama sound electronic music diffusion system
Oğuz et al. Creative Panning Techniques for 3D Music Productions: PANNERBANK Project as a Case Study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant