CN104754243A - Panoramic multichannel audio frequency control method based on variable domain acoustic image control - Google Patents
- Publication number
- CN104754243A CN104754243A CN201310754855.9A CN201310754855A CN104754243A CN 104754243 A CN104754243 A CN 104754243A CN 201310754855 A CN201310754855 A CN 201310754855A CN 104754243 A CN104754243 A CN 104754243A
- Authority
- CN
- China
- Prior art keywords
- acoustic image
- audio
- track
- audio amplifier
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Management Or Editing Of Information On Record Carriers (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
Abstract
The invention relates to control technology for performance and entertainment equipment, and specifically to a panoramic multichannel audio control method based on variable-domain acoustic image control. The method comprises the following steps: displaying a time axis on the display interface of an integrated console; adding and/or deleting tracks for controlling the corresponding performance equipment, the tracks including light tracks; editing track attributes; adding materials; editing material attributes; and having the integrated console send corresponding control instructions according to each track's attributes and the attributes of its materials. The method solves the technical problems of editing and synchronously controlling existing programs and of controlling acoustic-image movement effects.
Description
Technical field
The present invention relates to performance-equipment control technology, and in particular to a panoramic multichannel audio control method based on variable-domain acoustic image control.
Background art
In the process of arranging a program, one prominent problem is the coordination and synchronization among the different specialties (audio, video, lighting, machinery, and so on). In a large-scale performance each specialty is relatively independent, so a very large crew is needed to guarantee smooth arrangement and execution of the show. During arrangement, most of the time is spent on coordination and synchronization between specialties, and possibly much more time than is actually devoted to the program itself.
Because the specialties are relatively independent, their control methods differ greatly. For on-site audio-visual synchronized editing, video is controlled from the lighting console while audio is edited and played back on a multi-track playback system. Audio can easily be located to any point in time and played back from there, but video can only be started from the beginning (an operator can manually jump to the corresponding position by frame number, but playback cannot follow a time code). For live show control this lacks sufficient flexibility.
In addition, in existing professional loudspeaker systems for film, television and stage, the loudspeaker positions are fixed: the acoustic image is roughly placed at the centre of the stage by the left/right main loudspeakers on both sides of the stage, or by the left/centre/right main loudspeakers. Although a venue is equipped with a large number of loudspeakers at various positions in addition to the main stage loudspeakers, the acoustic image of the loudspeaker system hardly ever changes during a whole performance.
Therefore, editing and synchronous control of current programs, as well as flexible control of acoustic-image movement effects, are key technical problems in the art urgently awaiting solution.
Summary of the invention
The technical problem solved by the present invention is to provide a panoramic multichannel audio control method based on variable-domain acoustic image control that simplifies multi-specialty control for film, television and stage performances and allows the acoustic image of the sound reinforcement system to be set flexibly and quickly.
To solve the above technical problem, the technical solution adopted by the present invention is a panoramic multichannel audio control method based on variable-domain acoustic image control, comprising:
Adding audio tracks: adding, on the display interface, one or more audio tracks parallel to and aligned with the time axis, each audio track corresponding to an output channel;
Editing audio track attributes;
Adding audio materials: adding one or more audio materials to an audio track and generating in the audio track an audio material icon corresponding to each audio material, the length of audio track occupied by the icon matching the total duration of the audio material;
Editing audio material attributes, the audio material attributes comprising start position, end position, start time, end time, total duration and playback duration;
Adding audio sub-tracks: adding one or more audio sub-tracks corresponding to one of the audio tracks, each audio sub-track being parallel to the time axis and corresponding to the output channel of its audio track, the audio sub-track types including acoustic image sub-tracks;
Adding acoustic image sub-tracks and acoustic image materials: adding one or more acoustic image materials to an acoustic image sub-track and generating in that sub-track an acoustic image material icon corresponding to each material, the length of sub-track occupied by the icon matching the total duration of the material;
Editing acoustic image sub-track attributes;
Editing acoustic image material attributes, the acoustic image material attributes likewise comprising start position, end position, start time, end time, total duration and playback duration.
The acoustic image material is acoustic image trajectory data, which comprises variable-domain acoustic image trajectory data and loudspeaker link data, wherein:
The variable-domain acoustic image trajectory data is obtained as follows:
Setting loudspeaker nodes: adding or deleting loudspeaker nodes in the loudspeaker layout map;
Modifying loudspeaker node attributes: the attributes of a loudspeaker node include loudspeaker coordinates, loudspeaker type, corresponding output channel and initial level;
Setting the acoustic image travel path and dividing acoustic image regions: setting a plurality of acoustic image regions in the loudspeaker layout map, each region comprising several loudspeaker nodes, and setting a travel path that traverses the regions;
Editing acoustic image region time attributes, including the moment corresponding to each acoustic image region, the time required to travel from the current region to the next region, and the total travel duration;
Recording the variable-domain acoustic image trajectory data: recording the output level of every loudspeaker node at every moment while the acoustic image travels successively through the acoustic image regions along the set travel path (an illustrative sketch follows).
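Purely as an illustration, the following sketch records variable-domain trajectory data under assumed rules (simple region occupancy, the first region starting at moment zero, and no cross-fade between regions); the field names, sampling step and on/off levels are not specified by the method and are chosen here only for illustration.

```python
def variable_domain_levels(regions, step=0.1, on_level=0.0, off_level=float("-inf")):
    """Minimal sketch: `regions` is an ordered list of
    {"nodes": [...], "moment": t, "to_next": dt} (first "moment" assumed 0).
    While the acoustic image occupies a region, that region's nodes output
    `on_level` dB and all other nodes `off_level`."""
    all_nodes = {n for r in regions for n in r["nodes"]}
    total = regions[-1]["moment"] + regions[-1]["to_next"]
    data, t = [], 0.0
    while t <= total:
        current = [r for r in regions if r["moment"] <= t][-1]   # region occupied at time t
        data.append((t, {n: (on_level if n in current["nodes"] else off_level)
                         for n in all_nodes}))
        t += step
    return data

# e.g. the image dwells 2 s in a front region, then 2 s in a rear region
print(variable_domain_levels([
    {"nodes": ["front_L", "front_R"], "moment": 0.0, "to_next": 2.0},
    {"nodes": ["rear_L", "rear_R"], "moment": 2.0, "to_next": 2.0},
])[0])
```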
The loudspeaker link data is obtained as follows:
Setting loudspeaker nodes: adding or deleting loudspeaker nodes in the loudspeaker layout map;
Modifying loudspeaker node attributes: the attributes of a loudspeaker node include loudspeaker coordinates, loudspeaker type, corresponding output channel and initial level;
Setting loudspeaker node link relations: linking a selected subwoofer (ultra-low-frequency) node to several nearby full-range loudspeaker nodes;
Recording the loudspeaker link data: calculating and recording the output level DerivedTrim of the subwoofer, where DerivedTrim = 10·log(Ratio) + DeriveddB and Ratio = Σ 10^((Trim_i + LinkTrim_i)/10), in which Trim_i is the output level of any linked full-range node i, LinkTrim_i is the link level previously set between that full-range node i and the subwoofer, DeriveddB is the initial level of the subwoofer node, and DerivedTrim is the output level of the subwoofer node after it is linked to the selected full-range nodes.
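As a worked illustration of the formula above, the sketch below computes DerivedTrim for a subwoofer linked to a few full-range nodes; the function name and the example values are illustrative only.

```python
import math

def derived_trim(full_range_trims_db, link_trims_db, derived_db):
    """DerivedTrim = 10*log10(Ratio) + DeriveddB, with
    Ratio = sum over linked full-range nodes i of 10 ** ((Trim_i + LinkTrim_i) / 10)."""
    ratio = sum(10 ** ((t + l) / 10.0)
                for t, l in zip(full_range_trims_db, link_trims_db))
    return 10.0 * math.log10(ratio) + derived_db

# e.g. a subwoofer (initial level -3 dB) linked to three full-range nodes
print(derived_trim([-6.0, -3.0, 0.0], [-10.0, -10.0, -10.0], -3.0))
```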
Compared with the prior art, the beneficial effects are as follows: the playback master control embodies the idea of integrated performance management. Technically, the coupling between these units is very low; they operate independently without affecting one another, and the only prominent link between them is time, that is, what is playing at what moment. From the user's point of view, however, this time relationship is precisely what matters most. If the states of these units can be viewed and managed in one place, the user is spared much unnecessary trouble, such as coordinating synchronization between units, and cross-referencing and comparing the different specialties when editing and revising.
Brief description of the drawings
Fig. 1 is a schematic diagram of the panoramic multichannel audio control method based on variable-domain acoustic image control of the embodiment.
Fig. 2 is a schematic diagram of the audio control part of the method of the embodiment.
Fig. 3 is a schematic diagram of the operation of the audio sub-tracks in the method of the embodiment.
Fig. 4 is a schematic diagram of the video control part of the method of the embodiment.
Fig. 5 is a schematic diagram of the lighting control part of the method of the embodiment.
Fig. 6 is a schematic diagram of the device control part of the method of the embodiment.
Fig. 7 is a schematic diagram of the integrated performance control system of the embodiment.
Fig. 8 is a schematic diagram of the audio control module of the integrated performance control system of the embodiment.
Fig. 9 is a schematic diagram of the video control module of the integrated performance control system of the embodiment.
Fig. 10 is a schematic diagram of the lighting control module of the integrated performance control system of the embodiment.
Fig. 11 is a schematic diagram of the device control module of the integrated performance control system of the embodiment.
Fig. 12 is a schematic diagram of the multi-track playback editing module interface of the method of the embodiment.
Fig. 13 is a schematic diagram of the audio control part of the integrated performance control system of the embodiment.
Fig. 14 is a schematic diagram of the track matrix module of the integrated performance control system of the embodiment.
Fig. 15 is a schematic diagram of the video control part of the integrated performance control system of the embodiment.
Fig. 16 is a schematic diagram of the lighting control part of the integrated performance control system of the embodiment.
Fig. 17 is a schematic diagram of the device control part of the integrated performance control system of the embodiment.
Fig. 18 is a schematic diagram of the steps of the variable-track acoustic image trajectory control method of the embodiment.
Fig. 19 is a schematic diagram of the steps for generating variable-track acoustic image trajectory data in the embodiment.
Fig. 20 is a schematic diagram of the loudspeaker layout map and a variable-track acoustic image trajectory of the embodiment.
Fig. 21 is a schematic diagram of a triangle of loudspeaker nodes in the embodiment.
Fig. 22 is a schematic diagram of the steps for generating variable-domain acoustic image trajectory data in the embodiment.
Fig. 23 is a schematic diagram of the loudspeaker layout map and a variable-domain acoustic image trajectory of the embodiment.
Fig. 24 is a schematic diagram of the steps for generating fixed-point acoustic image trajectory data in the embodiment.
Fig. 25 is a schematic diagram of the steps for generating loudspeaker link data in the embodiment.
Fig. 26 is a schematic diagram of loudspeaker linking in the embodiment.
Detailed description of the embodiments
The acoustic image trajectory control of all types according to the present invention is further described below with reference to the accompanying drawings.
This embodiment provides a panoramic multichannel audio control method based on variable-domain acoustic image control that simplifies multi-specialty control for film, television and stage performances and allows the acoustic image of the sound reinforcement system to be set flexibly and quickly. Through the multi-track playback editing module of the integrated console, the method realizes centralized arrangement and control of materials from multiple specialties. As shown in Fig. 1, the method comprises the following steps:
S101: displaying a time axis on the display interface of the integrated console;
S102: adding and/or deleting tracks used to control the corresponding performance equipment;
S103: editing track attributes;
S104: adding materials;
S105: editing material attributes;
S106: the integrated console sends corresponding control commands according to each track's attributes and the attributes of its materials.
As shown in Fig. 2 and Fig. 12, the panoramic multichannel audio control method based on variable-domain acoustic image control comprises multi-track audio playback control (corresponding to the audio control module described below), which specifically comprises the following steps:
S201: adding audio tracks — one or more audio tracks (regions) 1, 2 parallel to and aligned with the time axis are added on the display interface, each audio track corresponding to an output channel.
S202: editing audio track attributes — the editable audio track attributes include track lock and track mute. The mute attribute controls whether the audio materials on the track and all of its sub-tracks are muted; it is the master switch of the audio track. The lock attribute freezes the track: apart from individual attributes such as mute and hide-sub-track, no other attribute, material position or material attribute on the audio track can be modified.
S203: adding audio materials — one or more audio materials 111, 112, 113, 211, 212, 213, 214 are added to audio tracks 1, 2, and an audio material icon corresponding to each material is generated in the audio track; the length of audio track occupied by the icon matches the total duration of the material. Before adding an audio material, the audio material list is first obtained from the audio server, and a material selected from that list is then added to the audio track. When an audio material is added, an audio attribute file corresponding to it is generated; the integrated console controls the instructions sent to the audio server by editing this attribute file instead of directly calling or editing the source file of the audio material, which guarantees the safety of the source file and the stability of the integrated console.
S204: editor's audio material attribute, described audio material attribute comprises start position, final position, time started, end time, total duration, reproduction time length.Wherein, the time shaft moment of described start position corresponding to this audio material start position (vertically), the time shaft moment of described final position corresponding to this audio material final position (vertically), the described time started is that this audio material actual beginning on a timeline plays the moment, and the described end time is this audio material physical end play position on a timeline.Generally speaking, the time started can be delayed in start position, and the end time can in advance in final position.Total duration refers to the script time span of audio material, the time difference of start position position is to terminal total duration of audio material, reproduction time length refers to this audio material reproduction time length on a timeline, and the time difference of time started and end time is the reproduction time length of this audio material.The shearing manipulation to acoustic image material can be realized by adjustment time started and end time, namely only play the part that user wishes to play.
Can change start position and final position by the adjustment position of (transverse shifting) audio material in audio track, but start position and final position relative position on a timeline can not change, namely the length of audio material can not change.Audio material actual play time on a timeline and length thereof can be changed by adjusting time started of audio material and end time.Multiple audio material can be placed in an audio track, represent within the time period represented by time shaft, (through corresponding output channel) multiple audio material can be play successively.It should be noted that the audio material position (time location) in arbitrary audio track can freely adjust, but should be not overlapping between each audio material.
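For illustration, the sketch below models the audio material attributes and the trimming and moving rules described above as a small record; the class and field names are assumptions, not part of the method.

```python
from dataclasses import dataclass

@dataclass
class AudioMaterial:
    """Attribute file for one audio material placed on an audio track (times in seconds)."""
    start_pos: float   # time-axis moment of the material's start edge
    total: float       # original length of the source file (never changes)
    start_time: float  # actual playback start (>= start_pos)
    end_time: float    # actual playback end (<= start_pos + total)

    @property
    def end_pos(self) -> float:
        return self.start_pos + self.total          # end edge

    @property
    def playback_duration(self) -> float:
        return self.end_time - self.start_time      # trimmed length actually played

    def move(self, delta: float) -> None:
        """Horizontal move: shifts all moments together, keeps the length unchanged."""
        self.start_pos += delta
        self.start_time += delta
        self.end_time += delta

m = AudioMaterial(start_pos=10.0, total=30.0, start_time=12.0, end_time=35.0)
assert m.playback_duration == 23.0 and m.end_pos == 40.0
```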
Further, because the integrated console works on the attribute files corresponding to the audio materials, it can also cut and splice audio materials. Cutting splits one audio material on an audio track into several materials, each with its own attribute file; the source file remains intact, and the integrated console uses the new attribute files to send control commands that call the source file in turn for playback and sound-effect operations. Similarly, splicing merges two audio materials into one, merging their attribute files into a single file through which the audio server is instructed to call the two audio source files.
Further, groups of physical operation keys corresponding to the individual audio tracks can be provided on the integrated console, so that audio material attributes can be adjusted manually, for example a knob that shifts a material's position (time-axis position) in the audio track forwards or backwards.
S205: adding audio sub-tracks 12, 13, 14, 15, 21, 22 — one or more audio sub-tracks corresponding to one of the audio tracks are added; each audio sub-track is parallel to the time axis and corresponds to the output channel of its audio track.
Each audio track can have attached audio sub-tracks, whose types include acoustic image sub-tracks and sound-effect sub-tracks. An acoustic image sub-track applies acoustic image trajectory processing to some or all of the audio materials of its audio track, and a sound-effect sub-track applies sound-effect processing to some or all of those materials. In this step the following sub-steps can be performed:
S301: adding acoustic image sub-tracks and acoustic image materials — one or more acoustic image materials 121, 122 are added to an acoustic image sub-track, and an acoustic image material icon corresponding to each material is generated in the sub-track; the length of sub-track occupied by the icon matches the total duration of the material.
S302: editing acoustic image sub-track attributes — as with audio tracks, the editable attributes include track lock and track mute.
S303: editing acoustic image material attributes — as with audio materials, the attributes include start position, end position, start time, end time, total duration and playback duration.
Through the acoustic image material on an acoustic image sub-track, acoustic image trajectory processing is applied, during the period between the material's start time and end time, to the signal output by the output channel of the audio track to which the sub-track belongs. Adding different types of acoustic image materials to the sub-track therefore applies different types of trajectory processing to that channel, and adjusting the start position, end position, start time and end time of each material adjusts when the trajectory processing begins and how long the acoustic image path effect lasts.
An acoustic image material differs from an audio material in that an audio material represents audio data, whereas acoustic image trajectory data is, for a period of preset length, the output-level-versus-time data of every loudspeaker node that makes the acoustic image formed by the output levels of the virtual loudspeaker nodes in the loudspeaker layout map move along a preset path or remain stationary. In other words, the trajectory data contains the output-level changes of all loudspeaker nodes in the layout map over that period. The types of acoustic image trajectory data are fixed-point, variable-track and variable-domain; the type of the trajectory data determines the type of the acoustic image material, and the total movement duration of the trajectory data determines the time difference between the material's start and end positions, i.e. the material's total duration. Acoustic image trajectory processing means adjusting the actual output level of each physical loudspeaker corresponding to each loudspeaker node according to the trajectory data, so that the acoustic image of the physical loudspeaker system moves along the set path, or remains stationary, during the period of preset length.
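For concreteness, a minimal assumed representation of such trajectory data is sketched below: for every loudspeaker node, a list of (time, level) samples over the preset duration. The node names, sample times and level values are illustrative only.

```python
# Assumed representation of acoustic image trajectory data (levels in dB):
trajectory = {
    "duration": 4.0,                     # total duration of the pan, in seconds
    "levels": {
        "node_1": [(0.0, 0.0), (2.0, -12.0), (4.0, float("-inf"))],
        "node_2": [(0.0, float("-inf")), (2.0, -12.0), (4.0, 0.0)],
    },
}
```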
S304: adding sound-effect sub-tracks — the types of sound-effect sub-track include volume/gain sub-tracks 13, 22 and EQ sub-tracks 14, 15, 21; each audio track can have one volume/gain sub-track and one or more EQ sub-tracks. The volume/gain sub-track adjusts the signal level of the output channel of its audio track, and the EQ sub-track applies EQ processing to the signal of that output channel.
S305: editing sound-effect sub-track attributes — in addition to track lock, track mute and track identification, the attributes include the sound-effect processing parameters corresponding to the sub-track type: for a volume/gain sub-track, an output level adjustment parameter; for an EQ sub-track, EQ processing parameters. Modifying these parameters adjusts the sound effects of the output channel of the audio track to which the sub-track belongs.
S206: saving the data, or generating, from the attributes of the audio tracks and their sub-tracks and of the audio and acoustic image materials, control commands for the source files corresponding to the audio materials, and using these commands to control the playback of the audio source files and the acoustic image and sound-effect processing.
The control commands determine whether the audio source file of a material is called (played), the start and end times of source-file playback (measured on the time axis), and the acoustic image and sound-effect processing of the source file; the specific commands correspond to the attributes of each audio track and its attached sub-tracks and of the audio and acoustic image materials. In other words, an audio track does not call or process the source file of an audio material directly; it only processes the attribute file corresponding to that source file, and control of the source file is achieved indirectly by editing and adjusting the attribute file, adding/editing acoustic image materials, and editing the attributes of the audio track and its sub-tracks.
For example, an audio material added to an audio track enters the playlist and is played when the track starts playing; editing the mute attribute of the audio track controls whether the track and its attached sub-tracks are muted (effective); and editing the lock attribute prevents every attribute, material position and material attribute on the track from being modified (locked state), except individual attributes such as mute and hide-sub-track. Further details are given in the description above.
As shown in Fig. 4 and Fig. 12, the method of this embodiment may optionally also include video playback control (corresponding to the video control module described below), which specifically comprises the following steps:
S401: adding a video track — a video track 4 (region) parallel to and aligned with the time axis is added on the display interface; the video track corresponds to a controlled device, in the present invention a video server.
S402: editing video track attributes — the editable attributes include track lock and track mute; they are similar to the audio track attributes.
S403: adding video materials — one or more video materials 41, 42, 43, 44 are added to the video track, and a video material icon corresponding to each material is generated in the track; the length of video track occupied by the icon matches the total duration of the material. Before adding a video material, the video material list is first obtained from the video server and a material selected from it is added to the track. When a video material is added, a video attribute file corresponding to it is generated; the integrated console controls the instructions sent to the video server by editing this attribute file instead of directly calling or editing the source file of the video material, guaranteeing the safety of the source file and the stability of the console.
S404: editing video material attributes — the attributes include start position, end position, start time, end time, total duration and playback duration; they are similar to the audio material attributes. Video materials can likewise be moved horizontally, cut and spliced, and a group of physical operation keys corresponding to the video track can be added to the integrated console so that video material attributes can be adjusted manually.
S405: saving the data, or generating, from the video track attributes and the video material attributes, control commands for the source files corresponding to the video materials, and using these commands to control the playback of the video source files. As with the audio tracks, the specific commands correspond to the track attributes and the material attributes.
As shown in Fig. 5 and Fig. 12, the method of this embodiment may optionally also include lighting control (corresponding to the lighting control module described below), which specifically comprises the following steps:
S501: adding a light track — a light track 3 (region) parallel to and aligned with the time axis is added on the display interface; the light track corresponds to a controlled device, in the present invention a lighting network signal adapter (e.g. an Art-Net network card).
S502: editing light track attributes — the editable attributes include track lock and track mute; they are similar to the audio track attributes.
S503: adding light materials — one or more light materials 31, 32, 33 are added to the light track, and a light material icon corresponding to each material is generated in the track; the length of light track occupied by the icon matches the total duration of the material. As with audio and video materials, the light track does not load the light material itself; it only generates the attribute file corresponding to the light material's source file and controls the output of that source file by sending control commands through the attribute file.
A light material is lighting network control data of a certain length of time, such as Art-Net data encapsulating DMX data. A light material is generated as follows: after a lighting program has been arranged on a conventional lighting console, the integrated console is connected through its lighting network interface to the lighting network interface of the conventional console and records the lighting control signals output by that console; during recording the integrated console stamps a time code onto the recorded signals so that they can be edited and controlled on the light track.
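Purely as an illustration of this recording step, the sketch below captures Art-Net frames from a lighting console over UDP and stamps each frame with a time code relative to the start of recording; the function name, recording duration and buffer size are assumptions (6454 is the standard Art-Net UDP port).

```python
import socket, time

def record_light_material(bind_addr="0.0.0.0", port=6454, seconds=10.0):
    """Minimal sketch: capture Art-Net packets for a while and attach a time code
    to each one so the material can later be edited on a light track."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_addr, port))
    sock.settimeout(0.5)
    t0 = time.monotonic()
    frames = []
    while time.monotonic() - t0 < seconds:
        try:
            packet, _ = sock.recvfrom(2048)
        except socket.timeout:
            continue
        frames.append((time.monotonic() - t0, packet))  # (time code, raw DMX-over-Art-Net frame)
    return frames
```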
S504: editor's light material attribute, described light material attribute comprises start position, final position, time started, end time, total duration, reproduction time length.Light material attribute and audio material Attribute class are seemingly, audio material also can carry out transverse shifting, cut off and concatenation simultaneously, or on integrated control platform, increase an adjustment group object operation keys corresponding with light track, manually to be adjusted the attribute of light material by physical operation key.
S505: preserve data, or according to light track attribute, the attribute of light material generates the control command to the corresponding source file of light material, and carries out Play Control according to this control command to the source file of video material and acoustic image, audio effect processing control.Similar with track of video, the attribute of concrete control command audio track, the attribute of video material are corresponding.
As shown in Fig. 6 and Fig. 12, the method of this embodiment may optionally also include device control (corresponding to the device control module described below), which specifically comprises the following steps:
S601: adding device tracks — one or more device tracks 5 (regions) parallel to the time axis are added on the display interface; each device track corresponds to one controlled device, for example a mechanical device. Before adding a device track it must be confirmed that the controlled device is connected to the integrated console. The integrated console and the controlled devices can be connected over TCP: for example the integrated console is configured as a TCP server and each controlled device as a TCP client, and once a device's TCP client joins the network it actively connects to the TCP server of the integrated console.
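The sketch below illustrates this assumed connection scheme (console as TCP server, devices as TCP clients); the port number and the handler behaviour are illustrative only.

```python
import socketserver

class DeviceHandler(socketserver.BaseRequestHandler):
    """Minimal sketch: the integrated console runs a TCP server and every
    controlled device connects to it as a TCP client."""
    def handle(self):
        print("controlled device connected from", self.client_address)
        while data := self.request.recv(1024):   # status reports from the device
            print("device says:", data)

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9100), DeviceHandler) as srv:
        srv.serve_forever()
```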
S602: editing device track attributes — the editable attributes include track lock and track mute, similar to the audio track attributes; if a device track is muted, none of its attached control sub-tracks performs any operation.
S603: adding control sub-tracks — one or more control sub-tracks corresponding to one of the device tracks are added; each control sub-track is parallel to the time axis and corresponds to the controlled device of its device track.
S604: adding control materials — control materials of the appropriate type are added according to the type of the control sub-track, and corresponding control material icons are generated on the sub-track; the length of sub-track occupied by a control material matches its total duration.
The types of control sub-track include TTL control sub-tracks, relay control sub-tracks and network control sub-tracks. Correspondingly, the control materials that can be added to a TTL control sub-track include TTL materials 511, 512, 513 (e.g. TTL high-level and TTL low-level control materials); the control materials that can be added to a relay sub-track include relay materials 521, 522, 523, 524 (e.g. relay-open and relay-close control materials); and the control materials that can be added to a network control sub-track include network materials 501, 502, 503 (e.g. TCP/IP, UDP, RS-232 and RS-485 communication control materials). Adding a control material causes the corresponding control command to be sent; a control material is in fact a control command.
S605: editing control material attributes — the attributes include start position, end position and total duration. The start and end positions can be changed by adjusting (horizontally moving) the control material's position in its control sub-track, but their relative distance on the time axis cannot change, i.e. the length of the control material is fixed. The start position of a control material is the time-axis moment at which the corresponding control command starts to be sent to the controlled device, and the end position is the moment at which sending stops.
Further, association relations can be set between control materials in the same control sub-track: if the control command corresponding to the control material whose start position is earlier on the time axis has not been executed successfully, the control command corresponding to the associated, later control material is not sent (by the integrated console) or not executed (by the controlled device) — for example the opening and closing of a curtain, or lift control.
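A minimal sketch of this association rule, under the assumption that each later control material names the earlier one it depends on, is given below; the dictionary keys and the send() callback are illustrative.

```python
def send_materials(materials, send):
    """Send control materials in time order; a material that depends on an earlier
    one is skipped unless the earlier command succeeded."""
    succeeded = {}
    for m in sorted(materials, key=lambda m: m["start_pos"]):
        dep = m.get("depends_on")
        if dep is not None and not succeeded.get(dep, False):
            continue                     # earlier associated command failed: do not send
        succeeded[m["name"]] = send(m)   # send() returns True on success

# e.g. close the curtain only after it has been opened successfully
send_materials(
    [{"name": "curtain_open", "start_pos": 0.0},
     {"name": "curtain_close", "start_pos": 30.0, "depends_on": "curtain_open"}],
    send=lambda m: True,
)
```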
Further, a guard time of a certain length can be set before and after a control material on a control sub-track: within the guard time no control material may be added to the sub-track and no control command may be sent.
S606: saving the data, or generating control commands from the attributes of the device tracks and their control sub-tracks and of the control materials, and sending the commands to the corresponding controlled devices.
In addition, this embodiment also provides an integrated performance control system. As shown in Fig. 7, the system comprises an integrated console 70 and optionally an audio server 76, a video server 77, a lighting control module 78 and a device control module 79. The integrated console 70 comprises a multi-track playback editing module 71, which can perform one or more of the audio control, video control, lighting control and device control of the above integrated control method; the specific steps are not repeated here. The multi-track playback editing module comprises an audio control module 72 and optionally a video control module 73, a lighting control module 74 and a device control module 75.
As shown in Fig. 8, the audio control module 72 comprises an audio track adding module 81, an audio track attribute editing module 82, an audio material adding module 83, an audio material attribute editing module 84, an audio sub-track adding module 85, and a save-data/output-audio-control-command module 86. The functions realized by these modules correspond one-to-one to steps S201 to S206 above and are not repeated here; the same applies below.
Further, the audio playback control principle of the integrated performance control system is shown in Fig. 13. Besides the multi-track playback editing module, the integrated console also comprises a quick playback editing module and a physical input module. The quick playback editing module is used to edit audio materials in real time and to send corresponding control commands to the audio server 76 to play the source files of the audio materials; the physical input module corresponds to the physical operation keys on the integrated console 71 and is used for real-time tuning control of the audio sources fed into the console from outside.
Correspondingly, the audio server is provided with a mixing matrix module, a track matrix module, a 3x1 output mixing module and a physical output module. The mixing matrix module can receive the audio signals produced when the quick playback editing module and the multi-track playback editing module call, by control command, the audio source files stored in the audio server, as well as the audio signal output by the physical input module; the track matrix module can likewise receive each of these audio inputs. The mixing matrix mixes its audio inputs and passes the result to the output mixing module, while the track matrix module applies acoustic image trajectory processing to its audio inputs and passes the result to the output mixing module. The output mixing module receives the audio from the mixing matrix module, the track matrix module and the physical input module, applies 3x1 mixing, and outputs the result through the physical output interfaces of the physical output module. Here acoustic image trajectory processing means adjusting the levels sent to the individual physical loudspeakers according to the acoustic image trajectory data, so that the acoustic image of the loudspeaker system moves along the set path, or remains stationary, during the period of preset length.
In this embodiment the source files of the audio materials are kept on the audio server outside the integrated console. The multi-track playback editing module does not call or process the audio source files directly; it only processes the attribute files corresponding to the source files, and achieves indirect control of the source files by editing and adjusting the attribute files, adding/editing acoustic image materials, and editing the attributes of the audio tracks and their sub-tracks. What the output channel of each audio track outputs is therefore only control signals/commands aimed at the audio source files, and the various kinds of processing of the source files are performed by the audio server that receives the commands.
As shown in Fig. 14, the multi-track playback editing module receives the list of available audio materials from the audio server 76 and does not process the audio source files directly; the source files are stored in the audio server, which, after receiving the corresponding control command, calls the source files and performs the various kinds of processing, for example feeding them into the mixing matrix module for mixing or into the track matrix module for trajectory processing. Acoustic image materials are in fact also control commands; they can be kept on the integrated console 71 or uploaded to the audio server.
As shown in Fig. 9, the video control module 73 comprises a video track adding module 91, a video track attribute editing module 92, a video material adding module 93, a video material attribute editing module 94, and a save-data/output-video-control-command module 95; the functions realized by these modules correspond one-to-one to steps S401 to S405 above.
Further, the video editing and playback control principle of the system is shown in Fig. 15: the integrated console does not operate on the source files of the video materials directly, but obtains the video material list and the corresponding attribute files and sends control commands to the video server, and the video server then plays the source files of the video materials and performs effect operations on them according to the control commands.
As shown in Fig. 10, the lighting control module 74 comprises a light track adding module 110, a light track attribute editing module 120, a light material adding module 130, a light material attribute editing module 140, and a save-data/output-lighting-control-command module 150; the functions realized by these modules correspond one-to-one to steps S501 to S505 above.
Further, the lighting control principle of the system is shown in Fig. 16: the integrated console is also provided with a light signal recording module for recording the lighting control signals output by a lighting console and stamping a time code onto the recorded signals during recording, so that they can be edited and controlled on a light track.
As shown in Fig. 11, the device control module 75 comprises a device track adding module 151, a device track attribute editing module 152, a control sub-track adding module 153, a control material adding module 154, a control material attribute editing module 155, and a save-data/output-device-control-command module 156; the functions realized by these modules correspond one-to-one to steps S601 to S606 above.
Further, the device control principle of the system is shown in Fig. 17: the various device control signals output by the integrated console are delivered to the corresponding controlled devices through the protocol interfaces on a device adapter.
In addition, the integrated console can also comprise an acoustic image trajectory data generating module for producing (generating) acoustic image trajectory data, i.e. acoustic image materials. The trajectory data obtained by this module can be called by the multi-track playback editing module, thereby controlling the track matrix module of the audio server and hence the acoustic image trajectory. Further, this embodiment provides a variable-track acoustic image trajectory control method, in which a main control device (such as the integrated console or the audio server) sets the output level of each loudspeaker node of the physical loudspeaker system so that, within a set total duration, the acoustic image moves in a set manner or remains stationary. As shown in Fig. 18, the control method comprises:
S101: generating acoustic image trajectory data;
S102: within the total duration corresponding to the trajectory data, adjusting the output level of each physical loudspeaker according to the trajectory data;
S103: within that total duration, superposing the input level of the signal fed to each physical loudspeaker with the output level of that loudspeaker to obtain the level actually output by each physical loudspeaker.
Acoustic image trajectory data is, for a period of preset length (the total duration of the pan), the output-level-versus-time data of every loudspeaker node that makes the acoustic image formed by the output levels of the virtual loudspeaker nodes in the virtual loudspeaker layout map of the integrated console move along a preset path or remain stationary. In other words, the trajectory data contains the output-level changes of all loudspeaker nodes in the layout map over that period. For each loudspeaker node, the output level varies with time within this set period and may also be zero, negative or even negative infinity; negative infinity is preferred.
Each loudspeaker node corresponds to one loudspeaker entity in the physical loudspeaker system, and each loudspeaker entity comprises one or more loudspeakers located at the same position; that is, each node may correspond to one or more co-located loudspeakers. So that the physical loudspeaker system can reproduce the acoustic image path more accurately, the positions of the virtual loudspeaker nodes in the layout map should correspond to the positions of the loudspeaker entities of the physical system, and in particular the relative positions of the nodes should correspond to the relative positions of the loudspeaker entities.
The level actually output by a loudspeaker entity is the superposition of the level of the input signal and the output level, in the trajectory data, of the loudspeaker node corresponding to that entity. The former is a property of the input signal, while the latter can be regarded as a property of the loudspeaker entity itself. At any moment different input signals have different input levels, whereas a given loudspeaker entity has only one output level. Acoustic image trajectory processing can therefore be understood as processing the output level of each loudspeaker entity so as to form the preset acoustic image path effect (including a stationary acoustic image).
The superposition of the input level and the output level of a loudspeaker entity can be performed before the audio signal actually enters the loudspeaker entity, or after it has entered it; this depends on how the links of the whole sound reinforcement system are formed and on whether the loudspeaker entity has a built-in audio signal processing module such as a DSP unit.
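As a worked illustration of the superposition in step S103, the sketch below adds the two levels in dB; the function name and the example values are assumptions.

```python
def actual_output_db(input_level_db: float, node_level_db: float) -> float:
    """Level actually radiated by a loudspeaker entity: input signal level plus the
    node output level taken from the trajectory data, both in dB."""
    return input_level_db + node_level_db   # a -inf node level silences the box

print(actual_output_db(-20.0, -6.0))             # -26.0 dB
print(actual_output_db(-20.0, float("-inf")))    # -inf: node not selected
```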
The types of acoustic image trajectory data are fixed-point acoustic image data, variable-track acoustic image trajectory data and variable-domain acoustic image trajectory data. When trajectory data is generated by simulation on the integrated console, in order to control the speed and course of the acoustic image conveniently, in the embodiment of the present invention the travel path of the acoustic image is represented by line segments connecting, in turn, a number of acoustic image trajectory control points distributed discretely in the loudspeaker layout map; that is, the travel path is determined by several discretely distributed trajectory control points together with the total travel time of the acoustic image.
A fixed-point acoustic image is the situation in which, within a period of preset length, one or more loudspeaker nodes selected in the loudspeaker layout map maintain their output levels while the output level of every unselected node is zero or negative infinity. Correspondingly, fixed-point acoustic image data is the output-level-versus-time data of every loudspeaker node within that period when the selected nodes maintain their output levels and the unselected nodes output nothing (zero or negative infinity). For a selected node the output level is continuous over the set time (it may fluctuate up and down); for an unselected node the output level remains negative infinity.
A variable-track acoustic image is the situation in which, within a period of preset length, the loudspeaker nodes output levels according to a certain rule so that the acoustic image moves along a preset path. Correspondingly, variable-track acoustic image trajectory data is the output-level-versus-time data of every loudspeaker node over that period needed to make the acoustic image move along the preset path. The travel path does not need to be very precise, and the duration of the movement need not be long; it only needs to create roughly an acoustic image movement effect recognisable by the audience.
A variable-domain acoustic image is the situation in which, within a period of preset length, the output levels of the loudspeaker nodes change according to a certain rule so that the acoustic image moves through preset regions. Correspondingly, variable-domain acoustic image trajectory data is the output-level-versus-time data of every loudspeaker node over that period needed to make the acoustic image move through the preset regions.
As shown in Fig. 19, variable-track acoustic image trajectory data can be obtained as follows:
S201: setting loudspeaker nodes — adding or deleting loudspeaker nodes 11 in the loudspeaker layout map 10, see Fig. 20.
S202: modifying loudspeaker node attributes — the attributes of a loudspeaker node include loudspeaker coordinates, loudspeaker type, corresponding output channel, initial level, loudspeaker name, and so on. A loudspeaker node is represented by a loudspeaker icon in the layout map, and its coordinates can be changed by moving the icon. The loudspeaker type distinguishes full-range loudspeakers from subwoofers; finer types can be defined as needed. Every node in the layout map is assigned an output channel, each output channel corresponds to one loudspeaker entity in the physical system, and each entity comprises one or more co-located loudspeakers; that is, each node may correspond to one or more co-located loudspeakers. So that the acoustic image travel path designed in the layout map can be reproduced, the positions of the loudspeaker entities should correspond to the positions of the nodes in the layout map.
S203: dividing triangular regions — as shown in Fig. 21, the loudspeaker layout map is divided into a plurality of triangular regions according to the distribution of the loudspeaker nodes, the three vertices of each triangle being loudspeaker nodes; the triangles do not overlap, no triangle contains any other loudspeaker node, and each loudspeaker node corresponds to an output channel (or audio playback device).
Further, auxiliary loudspeaker nodes can be set to assist in determining the triangular regions; an auxiliary node has no corresponding output channel and outputs no level.
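For illustration, a barycentric point-in-triangle test such as the sketch below can decide which triangle of loudspeaker nodes encloses the current acoustic image point; the function names and coordinates are illustrative only.

```python
def in_triangle(p, a, b, c):
    """True if acoustic image point p lies inside (or on) the triangle whose vertices
    a, b, c are loudspeaker nodes, all given as 2-D (x, y) coordinates."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

# the triangle whose three vertex nodes will radiate the image is the one,
# among the non-overlapping triangles of step S203, that contains the point
assert in_triangle((1.0, 1.0), (0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
```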
S204: setting acoustic image trajectory control points and the travel path — the time-varying travel path 12 of the acoustic image is set in the loudspeaker layout map, together with several acoustic image trajectory control points 11 lying on that path. The travel path and control points can be set by either of the following methods:
1. Point-by-point construction: the (coordinate) positions of several trajectory control points are determined in turn in the loudspeaker layout map and connected in order to form the travel path. The moment corresponding to the first control point is zero, and the moment corresponding to each subsequent control point is the time elapsed from determining the first control point to determining the current one. For example, control points can be placed by clicking with a pointing device (such as the mouse pointer) in the layout map; the time elapsed between determining one control point and clicking the next determines the interval between the two points, from which the moment of every control point is finally calculated;
2. Drag generation: a pointer (such as the mouse pointer) is dragged in the loudspeaker layout map along an arbitrary straight line, curve or polyline to define the travel path; while the pointer is being dragged, starting from the initial position, one trajectory control point is generated on the path at every interval Ts. In this embodiment Ts is 108 ms (a sampling sketch follows this list).
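A minimal sketch of the drag-generation rule, assuming the pointer positions arrive as a stream of time-stamped samples, is given below; the names are illustrative only.

```python
def sample_drag(samples, ts=0.108):
    """`samples` is a stream of (elapsed_time, (x, y)) pointer positions recorded
    while dragging; one trajectory control point is kept every Ts seconds
    (Ts = 108 ms in the embodiment)."""
    points, next_t = [], 0.0
    for t, xy in samples:
        if t >= next_t:
            points.append({"pos": xy, "moment": t})  # moment measured from drag start
            next_t += ts
    return points
```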
S205: editor's acoustic image TRAJECTORY CONTROL point attribute: the attribute of acoustic image TRAJECTORY CONTROL point comprise acoustic image TRAJECTORY CONTROL point coordinates position, the corresponding moment, to the time needed for next acoustic image TRAJECTORY CONTROL point.Can to run in total duration corresponding to path one or more modifies to the moment corresponding to selected acoustic image TRAJECTORY CONTROL point, time needed for this selected acoustic image TRAJECTORY CONTROL point to next acoustic image TRAJECTORY CONTROL point and acoustic image.
Suppose that the moment that acoustic image TRAJECTORY CONTROL point i is corresponding is ti, acoustic image from acoustic image TRAJECTORY CONTROL point i run to next tracing point i+1 former required time be ti ', acoustic image total duration corresponding to path of running is t.This means acoustic image from initial position run to the time that acoustic image TRAJECTORY CONTROL point i needs be ti, the acoustic image time of running through needed for whole path is t.
If modify to the moment corresponding to a certain acoustic image TRAJECTORY CONTROL point, then the whole acoustic image TRAJECTORY CONTROL point each self-corresponding moment before this acoustic image TRAJECTORY CONTROL point, and run total duration in path of acoustic image all needs to adjust.If the moment of the former correspondence of acoustic image TRAJECTORY CONTROL point i is ti, moment corresponding after amendment is Ti, the moment of the former correspondence of arbitrary acoustic image TRAJECTORY CONTROL point J before acoustic image TRAJECTORY CONTROL point i is tj, moment corresponding after adjustment is Tj, acoustic image former total duration corresponding to path of running is t, amended total duration is T, so Tj=tj/ti*(Ti-ti), T=t+(Ti-ti).The adjustment mode advantages of simple that the present invention adopts, and amount of calculation is very little.
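Under the rule above, editing one control point's moment rescales every earlier point and lengthens the total duration by the same amount. The sketch below additionally assumes that the points after the edited one simply shift by that amount (the text only states that the total duration grows by Ti − ti); the function name and the list representation of the moments are illustrative.

```python
def retime_control_point(moments, i, new_ti):
    """`moments` is the list t0..tn of control-point moments (t0 = 0), the total
    duration being taken as moments[-1]. Applies Tj = tj + (tj/ti)*(Ti - ti) to
    every point before i; point i and (by assumption) all later points shift by
    (Ti - ti), so the total duration grows by the same amount."""
    ti, delta = moments[i], new_ti - moments[i]
    out = list(moments)
    for j in range(i):                       # earlier points: proportional rescale
        out[j] = moments[j] + moments[j] / ti * delta
    for j in range(i, len(moments)):         # edited point and later points: shift
        out[j] = moments[j] + delta
    return out

print(retime_control_point([0.0, 2.0, 4.0, 10.0], i=2, new_ti=6.0))
# -> [0.0, 3.0, 6.0, 12.0]; total duration 10 s becomes 12 s
```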
It can be understood that, after the moment corresponding to any control point is modified, the time added or removed can either be distributed over all the control points before that control point (the manner described above), or be distributed over all the control points on the acoustic image run path in proportion to duration, each keeping the same duration ratio. With the latter manner, suppose the time to be added at acoustic image trajectory control point i is ki; the moment corresponding to each control point is then modified to Ti = ti + (ki*ti)/t. That is, the time ki is not assigned entirely to one control point; every control point on the path receives a share of the time in proportion to its ratio with the total duration of the run path.
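The two distribution manners described above can be sketched as follows. This is a minimal illustration under stated assumptions: moments_ms holds each control point's moment in milliseconds with the last entry equal to the total run duration t, the function names are invented for the example, and the handling of points after control point i in retime_before (shifting them by the same delta so that T = t + (Ti - ti)) is an inference not spelled out in the excerpt.

```python
def retime_before(moments_ms, i, new_ti_ms):
    """First manner: the moment of control point i is changed to new_ti_ms;
    every control point j up to i is moved proportionally
    (Tj = tj + (tj/ti)*(Ti - ti)), and points after i are shifted by the
    same delta, so the total duration becomes t + (Ti - ti).
    Assumes i > 0 so that ti is non-zero."""
    ti = moments_ms[i]
    delta = new_ti_ms - ti
    out = []
    for j, tj in enumerate(moments_ms):
        if j <= i:
            out.append(tj + (tj / ti) * delta)
        else:
            out.append(tj + delta)      # later points keep their spacing
    return out

def distribute_added_time(moments_ms, ki_ms):
    """Latter manner: the added time ki is shared by every control point in
    proportion to its moment relative to the total duration t
    (Ti = ti + ki*ti/t)."""
    t = moments_ms[-1]
    return [tj + ki_ms * tj / t for tj in moments_ms]
```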
If the time needed from a certain control point to the next control point is adjusted, then the moment corresponding to the next control point and the total duration of the acoustic image run path need to be adjusted. Let the original moment of control point i be ti and its moment after modification be Ti; let the time originally needed for the acoustic image to run from control point i to the next trajectory point i+1 be ti' and the time needed after modification be Ti'; let the original total duration of the run path be t and the modified total duration be T. Then Ti+1 = Ti + Ti' and T = t + (Ti - ti) + (Ti' - ti').
If the total duration corresponding to the acoustic image run path is modified, then the moment corresponding to every control point on the run path and the time needed to reach the next control point are all adjusted. Let the original moment of control point i be ti and its adjusted moment be Ti; let the time originally needed for the acoustic image to run from control point i to the next trajectory point i+1 be ti' and the adjusted time be Ti'; let the original total duration of the run path be t and the modified total duration be T. Then Ti = (ti/t)*(T - t) + ti and Ti' = (ti'/t)*(T - t) + ti'.
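A sketch of the total-duration modification, under the same assumption that the last entry of moments_ms equals the current total duration t; since Ti = ti + (ti/t)*(T - t) simplifies to ti*T/t, the whole list is simply scaled:

```python
def rescale_to_total(moments_ms, new_total_ms):
    """When the total run duration changes from t to T, every control point's
    moment (and hence every segment time) scales by the same factor:
    Ti = ti + (ti/t)*(T - t) == ti * T / t."""
    t = moments_ms[-1]
    return [ti * new_total_ms / t for ti in moments_ms]
```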
S206: record the change-rail acoustic image track data: record the output level value of each audio amplifier node at each moment while the acoustic image runs along the set run path.
For the change-rail acoustic image, the output level value of each relevant audio amplifier node used to generate the acoustic image is calculated by the following method. As shown in Figure 22, suppose acoustic image trajectory point i (not necessarily an acoustic image trajectory control point) lies in the triangular region enclosed by three audio amplifier nodes, and the moment corresponding to this trajectory point i is ti. At this moment the three audio amplifier nodes at the vertices each output a level of a certain size, while the output level value of every other audio amplifier node in the audio amplifier distribution map is zero or negative infinity, which ensures that at moment ti the acoustic image in the audio amplifier distribution map is located at the above-mentioned trajectory point i. For the audio amplifier node A at any vertex of this triangular region, the output level at moment ti is dB_A1 = 10*lg(L_A'/L_A), where L_A' is the distance from this acoustic image trajectory point to the straight line formed by the other two vertices of the triangular region, and L_A is the distance from audio amplifier node A to that same straight line;
Further, an initialization level value can also be set for each audio amplifier node. Suppose the initialization level of the above-mentioned audio amplifier node A is dB_A; then at the above-mentioned moment ti the output level of audio amplifier node A is dB_A1' = dB_A + 10*lg(L_A'/L_A). After initialization levels are set for the remaining audio amplifier nodes, their output levels at each moment are obtained by analogy.
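A hedged sketch of this triangle-based level calculation follows. It is not the patented implementation: the function names and the 2-D tuple coordinates are assumptions of the example, and degenerate triangles (collinear vertices) are not handled.

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the straight line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(by - ay, bx - ax)
    return num / den

def vertex_level_db(track_point, vertex, other1, other2, init_db=0.0):
    """Output level of the audio amplifier node at `vertex` while the acoustic
    image trajectory point is inside the triangle (vertex, other1, other2):
    dB = init_db + 10*lg(L'/L), with L' the distance from the trajectory point
    to the line through the other two vertices and L the distance from the
    vertex itself to that line."""
    l_prime = point_to_line_distance(track_point, other1, other2)
    l_vertex = point_to_line_distance(vertex, other1, other2)
    if l_prime == 0.0:
        return float("-inf")          # trajectory point lies on the far edge
    return init_db + 10.0 * math.log10(l_prime / l_vertex)

def triangle_levels_db(track_point, vertices, init_db=(0.0, 0.0, 0.0)):
    """Levels of the three vertex nodes; all other nodes stay silent (-inf)."""
    a, b, c = vertices
    return (vertex_level_db(track_point, a, b, c, init_db[0]),
            vertex_level_db(track_point, b, c, a, init_db[1]),
            vertex_level_db(track_point, c, a, b, init_db[2]))
```

With this calculation, when the trajectory point coincides with vertex A, L_A' equals L_A and node A plays at its initialization level; when the trajectory point lies on the edge formed by the other two vertices, node A drops to negative infinity, which matches the behaviour described above.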
Further, as shown in Figure 20, if part of the acoustic image trajectory points (or of the acoustic image run path, for example at the end of the movement trajectory) does not fall inside any triangular region formed by three audio amplifier nodes, then auxiliary audio amplifier nodes 13 can be set to form new triangular regions, so as to ensure that every acoustic image trajectory point falls inside a corresponding triangular region. The auxiliary audio amplifier nodes have no corresponding output channels and output no level; they are used only to help determine triangular regions;
Further, when the output level values of the audio amplifier nodes are recorded, recording can be continuous, or can be carried out at a certain frequency. The latter means recording the output level value of each audio amplifier node once at every certain time interval. In the present embodiment, a frequency of 25 frames/second or 30 frames/second is used to record the output level value of each audio amplifier node while the acoustic image runs along the set path. Recording the output level data of each audio amplifier node at a certain frequency reduces the data volume, speeds up the processing of the input audio signal during acoustic image track processing, and guarantees the real-time behaviour of the acoustic image running effect.
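A minimal sketch of frame-rate recording, assuming a callable that returns the per-node levels for any moment (for example one built on the triangle calculation sketched above); the function name and the frame loop are illustrative, not the console's recording code.

```python
def sample_levels(level_at, total_ms, fps=25):
    """Sample the per-node output levels at a fixed frame rate instead of
    recording them continuously. `level_at(t_ms)` is any callable returning
    the dict {node_id: level_dB} for moment t_ms."""
    frame_ms = 1000.0 / fps
    frames, t = [], 0.0
    while t <= total_ms:
        frames.append((t, level_at(t)))
        t += frame_ms
    return frames
```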
As shown in Figure 5, the variable domain acoustic image track data can be obtained by the following method:
S501: set audio amplifier nodes: add or delete audio amplifier nodes in the audio amplifier distribution map.
S502: modify audio amplifier node attributes: the attributes of an audio amplifier node include the audio amplifier coordinates, audio amplifier type, corresponding output channel, initialization level, audio amplifier name, etc. An audio amplifier node is represented by an audio amplifier icon in the audio amplifier distribution map, and its coordinate position can be changed by moving the icon. The audio amplifier type refers to a full-range cabinet or an ultra-low-frequency audio amplifier; specific types can be divided according to actual needs. Each audio amplifier node in the audio amplifier distribution map is assigned an output channel, each output channel corresponds to one audio amplifier entity in the physical sound box system, and each audio amplifier entity comprises one or more audio amplifiers located at the same position. In other words, each audio amplifier node can correspond to one or more audio amplifiers located at the same position. To reproduce the acoustic image run path designed in the audio amplifier distribution map, the position distribution of the audio amplifier entities should correspond to the position distribution of the audio amplifier nodes in the audio amplifier distribution map.
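The node attributes listed above can be pictured as a small record type. The sketch below is only illustrative: the class and field names are assumptions, and the two cabinet types mirror the full-range / ultra-low-frequency distinction mentioned in the text.

```python
from dataclasses import dataclass
from enum import Enum

class CabinetType(Enum):
    FULL_RANGE = "full-range cabinet"
    ULTRA_LOW_FREQ = "ultra-low-frequency audio amplifier"

@dataclass
class AmplifierNode:
    name: str
    x: float                     # coordinates in the audio amplifier distribution map
    y: float
    cabinet_type: CabinetType
    output_channel: int          # channel driving the physical audio amplifier entity
    init_level_db: float = 0.0   # initialization level
```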
S503: set the acoustic image run path and divide acoustic image regions: arrange multiple acoustic image regions in the audio amplifier distribution map, each acoustic image region comprising several audio amplifier nodes, and set a run path that traverses each acoustic image region. Each acoustic image region is treated as one "acoustic image point": the acoustic image runs from one region to another in turn, until it has run through all the acoustic image regions. Acoustic image regions of arbitrary, mutually non-overlapping shape can be set in the audio amplifier distribution map, or acoustic image regions can be set quickly in the following manner:
Set a straight-line acoustic image run path in the audio amplifier distribution map and arrange several acoustic image regions along the run path, the border of each acoustic image region being approximately perpendicular to the running direction of the acoustic image. These acoustic image regions can be arranged side by side or at intervals, but to ensure the continuity of the acoustic image movement (running), the side-by-side arrangement is preferred. The total area of these acoustic image regions is less than or equal to the area of the whole audio amplifier distribution map. When dividing the acoustic image regions, equal widths or unequal widths can be used.
In concrete operation, the acoustic image run path can be set and the acoustic image regions divided at the same time by dragging the sign (such as a mouse pointer). Specifically: drag the sign in the audio amplifier distribution map from a certain start position along a certain direction to an end position; several acoustic image regions are divided evenly according to the straight-line distance from this start position to this end position, the border of each acoustic image region is perpendicular to the straight line from the start position to the end position, and the widths of the acoustic image regions are equal. The total running duration of the acoustic image is the time taken to drag the sign from the start position to the end position.
Suppose the straight-line distance of the sign from the start position to the end position is R, the total time taken is t, and the number of equally divided acoustic image regions is n; then n acoustic image regions of width R/n are generated automatically, and the duration corresponding to each acoustic image region is t/n.
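A sketch of this equal-width division, assuming 2-D tuple coordinates and millisecond times; the returned dictionary layout is an invention of the example, not a format defined by the patent.

```python
def divide_regions(start, end, total_ms, n):
    """Divide the straight drag from `start` to `end` into n acoustic image
    regions of equal width R/n, each lasting total_ms/n. For each region the
    index, its centre point on the path and its (start, end) moments are
    returned."""
    (sx, sy), (ex, ey) = start, end
    regions = []
    for i in range(n):
        frac_mid = (i + 0.5) / n          # centre of region i along the path
        centre = (sx + (ex - sx) * frac_mid, sy + (ey - sy) * frac_mid)
        regions.append({"index": i,
                        "centre": centre,
                        "t_start_ms": total_ms * i / n,
                        "t_end_ms": total_ms * (i + 1) / n})
    return regions
```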
S504: edit the acoustic image region time attributes, including the moment corresponding to an acoustic image region, the time required from the current acoustic image region to the next acoustic image region, and the total running duration of the acoustic image. Editing the acoustic image region attributes is similar to editing the change-rail acoustic image trajectory point attributes. If the moment corresponding to a certain acoustic image region is modified, then the moments corresponding to all acoustic image regions before that region, as well as the total running duration of the acoustic image, need to be adjusted. If the time required from a certain acoustic image region to the next acoustic image region is adjusted, then the moment corresponding to the next acoustic image region and the total running duration of the acoustic image need to be adjusted. If the total running duration of the acoustic image is modified, then the moment corresponding to every acoustic image region on the run path and the time required to reach the next acoustic image region are all adjusted.
S505: record the variable domain acoustic image track data: record the output level value of each audio amplifier node at each moment while the acoustic image runs through the acoustic image regions in turn along the set run path.
For the variable domain acoustic image, the output level values of the relevant audio amplifier nodes used to generate the acoustic image are calculated by the following method.
As shown in Figure 23, suppose the total running duration of the acoustic image of a certain variable domain track is t and the path is divided into 4 acoustic image regions of equal width in total. The acoustic image moves along a straight acoustic image run path from a certain acoustic image region 1 (acoustic image region i) to the next acoustic image region 2 (acoustic image region i+1); the midpoint of the line segment of the run path lying in acoustic image region 1 is acoustic image trajectory control point 1 (acoustic image trajectory control point i), and the midpoint of the line segment lying in acoustic image region 2 is acoustic image trajectory control point 2 (acoustic image trajectory control point i+1). While acoustic image trajectory point P runs from the current acoustic image region 1 to the next acoustic image region 2, the output level of every audio amplifier node in acoustic image region 1 is the region-1 level (region dB_i), the output level of every audio amplifier node in acoustic image region 2 is the region-2 level (region dB_i+1), and the output level of the audio amplifier nodes outside these two acoustic image regions is zero or negative infinity, where
region-1 dB value = 10*ln(η) ÷ 2.3025851
region-2 dB value = 10*ln(β) ÷ 2.3025851
(since 2.3025851 ≈ ln 10, these are equivalent to 10*lg(η) and 10*lg(β)), l_12 is the distance from acoustic image trajectory control point 1 to acoustic image trajectory control point 2, l_1P is the distance from acoustic image trajectory control point 1 to acoustic image trajectory point P, and l_P2 is the distance from the current acoustic image trajectory point P to acoustic image trajectory control point 2. It can be seen from the above formulas that at each acoustic image trajectory point two acoustic image regions output level, but when the trajectory point is located at one of the acoustic image trajectory control points only one of the acoustic image regions outputs level; for example, when acoustic image trajectory point P moves to acoustic image trajectory control point 2, only acoustic image region 2 outputs level, and the output level of acoustic image region 1 is zero.
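The crossfade between the two regions can be sketched as below. Note that the excerpt does not define η and β explicitly; the sketch assumes η = l_P2/l_12 and β = l_1P/l_12, a reading that reproduces the stated boundary behaviour (only region 2 outputs level once P reaches control point 2). The function name and argument names are illustrative.

```python
import math

def region_crossfade_db(l_12, l_1p, l_p2):
    """Crossfade levels of the two neighbouring acoustic image regions while
    trajectory point P moves from control point 1 towards control point 2.
    eta and beta are taken here as distance ratios (an assumption)."""
    eta = l_p2 / l_12
    beta = l_1p / l_12
    LN10 = 2.3025851                     # constant used in the patent text
    region1_db = 10.0 * math.log(eta) / LN10 if eta > 0 else float("-inf")
    region2_db = 10.0 * math.log(beta) / LN10 if beta > 0 else float("-inf")
    return region1_db, region2_db
```

Under this reading, at the midpoint of the segment (l_1P = l_P2 = l_12/2) both regions sit at roughly -3 dB, a conventional crossfade midpoint.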
When the output level values of the audio amplifier nodes of a variable domain acoustic image track are recorded, recording can be continuous, or can be carried out at a certain frequency. The latter means recording the output level value of each audio amplifier node once at every certain time interval. In the present embodiment, a frequency of 25 frames/second or 30 frames/second is used to record the output level value of each audio amplifier node while the acoustic image runs along the set path. Recording the output level data of each audio amplifier node at a certain frequency reduces the data volume, speeds up the processing of the input audio signal during acoustic image track processing, and guarantees the real-time behaviour of the acoustic image running effect.
As shown in Figure 24, the fixed point acoustic image track data can be obtained by the following method:
S701: set audio amplifier nodes: add or delete audio amplifier nodes in the audio amplifier distribution map.
S702: modify audio amplifier node attributes: the attributes of an audio amplifier node include the audio amplifier coordinates, audio amplifier type, corresponding output channel, initialization level, audio amplifier name, etc.
S703: set the acoustic image trajectory points and the total duration: select one or more audio amplifier nodes in the audio amplifier distribution map, each selected audio amplifier node serving as an acoustic image trajectory point, and set the time for which the acoustic image trajectory point stays at each audio amplifier node.
S704: record the fixed point acoustic image track data: record the output level value of each audio amplifier node at each moment within the above total duration.
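A sketch of recording a fixed point track follows, assuming the acoustic image sits on exactly one audio amplifier node at a time while the remaining nodes stay silent; the (node_id, dwell_ms, level_db) input layout and the sampled-frame output format are assumptions of the example, and the 25 frames/second default mirrors the embodiment.

```python
def fixed_point_track(dwells, fps=25):
    """Fixed point acoustic image track: `dwells` is a list of
    (node_id, dwell_ms, level_db) entries in visiting order. At any moment
    the node currently holding the acoustic image outputs level_db; every
    other node is treated as silent. Returns sampled frames of
    (t_ms, {node_id: level_db})."""
    frame_ms = 1000.0 / fps
    frames, t, elapsed = [], 0.0, 0.0
    for node_id, dwell_ms, level_db in dwells:
        end = elapsed + dwell_ms
        while t < end:
            frames.append((t, {node_id: level_db}))
            t += frame_ms
        elapsed = end
    return frames
```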
In addition, the acoustic image track data of the present invention also includes audio amplifier link data. Audio amplifier linking refers to performing an association operation on audio amplifier nodes: when the active audio amplifier node among the associated nodes outputs level, the passive audio amplifier nodes associated with it output level automatically. The audio amplifier link data is, after the association operation has been carried out on several selected audio amplifier nodes, the output level difference of each passive audio amplifier node relative to the active audio amplifier node. Audio amplifier nodes that need to be linked are usually relatively close to each other in spatial distribution.
As shown in Figure 25, the audio amplifier link data is obtained by the following method:
S801: set audio amplifier nodes: add or delete audio amplifier nodes in the audio amplifier distribution map.
S802: modify audio amplifier node attributes: the attributes of an audio amplifier node include the audio amplifier coordinates, audio amplifier type, corresponding output channel, initialization level, audio amplifier name, etc.
S803: set the audio amplifier node link relationship: a selected ultra-low-frequency audio amplifier node is linked to multiple nearby full-range cabinet nodes;
S804: record the audio amplifier link data: calculate and record the output level DerivedTrim of the ultra-low-frequency audio amplifier, where DerivedTrim = 10*log(Ratio) + DeriveddB and Ratio = Σ 10^((Trim_i + LinkTrim_i)/10), in which Trim_i is the output level value of any linked full-range cabinet node i itself, LinkTrim_i is the link level originally set between this full-range cabinet node i and the ultra-low-frequency audio amplifier, DeriveddB is the initialization level value of the ultra-low-frequency audio amplifier node, and DerivedTrim is the output level value of the ultra-low-frequency audio amplifier node after it has been set to link to the several full-range cabinet nodes. An ultra-low-frequency audio amplifier node can be set to link to one or more full-range cabinet nodes; after linking, when a full-range cabinet node outputs level, the ultra-low-frequency audio amplifier node linked to it outputs level automatically, so as to build a certain sound effect in coordination with the full-range cabinet node. To link an ultra-low-frequency audio amplifier node to a full-range cabinet node, only the distance between them, the character of the sound source and the required sound effect need to be considered; the level that the ultra-low-frequency audio amplifier node automatically outputs when following the playback of this full-range cabinet node, namely the link level, can then be set.
As shown in Figure 26, suppose the ultra-low-frequency audio amplifier node 24 in the audio amplifier distribution map is linked to 3 nearby full-range cabinet nodes. The own output level values of full-range cabinet nodes 21, 22 and 23 are Trim1, Trim2 and Trim3 respectively, and the link levels originally set between ultra-low-frequency audio amplifier node 24 and full-range cabinet nodes 21, 22 and 23 are LinkTrim1, LinkTrim2 and LinkTrim3 respectively. If the level summation ratio is Ratio, the initialization level value of ultra-low-frequency audio amplifier node 24 itself is DeriveddB, and the final output level value of ultra-low-frequency audio amplifier node 24 is DerivedTrim, then:
Ratio = 10^((Trim1+LinkTrim1)/10) + 10^((Trim2+LinkTrim2)/10) + 10^((Trim3+LinkTrim3)/10)
DerivedTrim = 10*log(Ratio) + DeriveddB
When Ratio is greater than 1, the output level gained by ultra-low-frequency audio amplifier node 24 from being linked to these three full-range cabinet nodes is taken as 0, i.e. its final output level value is the initialization level value.
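The link calculation can be sketched as follows; the values in the usage line are purely hypothetical, and the clamp applied when Ratio exceeds 1 follows the sentence above (the gained level is taken as 0 so that the final output level equals the initialization level).

```python
import math

def derived_trim_db(pairs, derived_db):
    """Link level of an ultra-low-frequency audio amplifier node driven by the
    full-range cabinet nodes it is linked to. `pairs` is a list of
    (trim_i_db, link_trim_i_db) tuples, one per linked full-range node;
    `derived_db` is the node's own initialization level."""
    ratio = sum(10.0 ** ((trim + link) / 10.0) for trim, link in pairs)
    gained = 10.0 * math.log10(ratio)
    if ratio > 1.0:      # per the text: the gained level is taken as 0 here
        gained = 0.0
    return gained + derived_db

# Hypothetical Figure 26-style example: three full-range nodes with their own
# trims and link trims; the result is the output level of the linked sub node.
print(derived_trim_db([(-6.0, -3.0), (-9.0, -3.0), (-12.0, -3.0)], derived_db=0.0))
```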
Claims (9)
1. A panorama multi-channel audio control method based on a variable domain acoustic image effect, characterized by comprising:
adding audio tracks: adding, on a display interface, one or more audio tracks parallel to and aligned with the time axis, each audio track corresponding to one output channel;
editing audio track attributes;
adding audio material: adding one or more audio materials to an audio track and generating in the audio track an audio material icon corresponding to the audio material, the length of the audio track occupied by the audio material icon matching the total duration of the audio material;
editing audio material attributes, the audio material attributes comprising a start position, an end position, a start time, an end time, a total duration and a playback time length;
adding audio sub-tracks: adding one or more audio sub-tracks corresponding to one of the audio tracks, each audio sub-track being parallel to the time axis and corresponding to the output channel of the audio track it belongs to, the types of the audio sub-tracks comprising an acoustic image sub-track;
adding acoustic image sub-tracks and acoustic image material: adding one or more acoustic image materials to an acoustic image sub-track and generating in the acoustic image sub-track an acoustic image material icon corresponding to the acoustic image material, the length of the acoustic image sub-track occupied by the acoustic image material icon matching the total duration corresponding to the acoustic image material;
editing acoustic image sub-track attributes;
editing acoustic image material attributes, the acoustic image material attributes likewise comprising a start position, an end position, a start time, an end time, a total duration and a playback time length;
wherein the acoustic image material is acoustic image track data, and the acoustic image track data comprises variable domain acoustic image track data and fixed point acoustic image track data, wherein:
the variable domain acoustic image track data is obtained by the following method:
setting audio amplifier nodes: adding or deleting audio amplifier nodes in an audio amplifier distribution map;
modifying audio amplifier node attributes: the attributes of an audio amplifier node comprising audio amplifier coordinates, audio amplifier type, corresponding output channel and initialization level;
setting an acoustic image run path and dividing acoustic image regions: arranging multiple acoustic image regions in the audio amplifier distribution map, each acoustic image region comprising several audio amplifier nodes, and setting a run path that traverses each acoustic image region;
editing acoustic image region time attributes, comprising the moment corresponding to an acoustic image region, the time required from the current acoustic image region to the next acoustic image region, and the total running duration of the acoustic image;
recording the variable domain acoustic image track data: recording the output level value of each audio amplifier node at each moment while the acoustic image runs through the acoustic image regions in turn along the set run path;
and the fixed point acoustic image track data is obtained by the following method:
setting audio amplifier nodes: adding or deleting audio amplifier nodes in the audio amplifier distribution map;
modifying audio amplifier node attributes: the attributes of an audio amplifier node comprising audio amplifier coordinates, audio amplifier type, corresponding output channel and initialization level;
setting acoustic image trajectory points and a total duration: selecting one or more audio amplifier nodes in the audio amplifier distribution map, each selected audio amplifier node serving as an acoustic image trajectory point, and setting the time for which the acoustic image trajectory point stays at each audio amplifier node;
recording the fixed point acoustic image track data: recording the output level value of each audio amplifier node at each moment within the above total duration.
2. The sound effect control method based on a variable domain acoustic image according to claim 1, characterized in that, when the variable domain acoustic image track data is generated, the output level value of each audio amplifier node at each moment, while the acoustic image runs through the acoustic image regions in turn along the set run path, is calculated as follows: let the total running duration of the acoustic image of a certain variable domain track be t, divided into n acoustic image regions of equal width in total; the acoustic image moves along a straight acoustic image run path from a certain acoustic image region i to the next acoustic image region i+1; the midpoint of the line segment of the run path lying in acoustic image region i is acoustic image trajectory control point i, and the midpoint of the line segment lying in acoustic image region i+1 is acoustic image trajectory control point i+1; while the current acoustic image trajectory point P runs from the current acoustic image region i to the next acoustic image region i+1, the output level of every audio amplifier node in acoustic image region i is the region level dB_i, the output level of every audio amplifier node in acoustic image region i+1 is the region level dB_i+1, and the output level of the audio amplifier nodes outside these two acoustic image regions is zero or negative infinity, and:
region dB_i value = 10*ln(η) ÷ 2.3025851
region dB_i+1 value = 10*ln(β) ÷ 2.3025851
wherein l_12 is the distance from acoustic image trajectory control point i to acoustic image trajectory control point i+1, l_1P is the distance from acoustic image trajectory control point i to acoustic image trajectory point P, and l_P2 is the distance from acoustic image trajectory point P to acoustic image trajectory control point i+1.
3. The sound effect control method based on a variable domain acoustic image according to claim 1, characterized in that, when the variable domain acoustic image track data is generated, the output level values of the audio amplifier nodes of the variable domain acoustic image track can be recorded continuously, or recorded at a certain frequency.
4. The sound effect control method based on a variable domain acoustic image according to claim 3, characterized in that, when the variable domain acoustic image track data is generated, said frequency is 25 frames/second or 30 frames/second.
5. The sound effect control method based on a variable domain acoustic image according to claim 1, characterized in that, when the variable domain acoustic image track data is generated, the manner of setting the acoustic image run path and dividing the acoustic image regions is: dragging a sign in the audio amplifier distribution map from a certain start position along a certain direction to an end position, while several acoustic image regions are divided evenly according to the straight-line distance from this start position to this end position, the border of each acoustic image region being perpendicular to the straight line from the start position to the end position and the widths of the acoustic image regions being equal, and the total running duration of the acoustic image being the time taken to drag the sign from the start position to the end position.
6. The sound effect control method based on a variable domain acoustic image according to claim 1, characterized in that, when the variable domain acoustic image track data is generated, the acoustic image regions in the audio amplifier distribution map do not overlap one another.
7. The sound effect control method based on a variable domain acoustic image according to claim 1, characterized in that, when the variable domain acoustic image track data is generated and the time corresponding to a certain acoustic image region is edited, if the original moment corresponding to a certain acoustic image region i is ti and the adjusted moment is Ti, the time originally required for the acoustic image to run from the current acoustic image region i to the next acoustic image region i+1 is ti' and the adjusted time is Ti', the original total duration corresponding to the acoustic image run path is t and the modified total duration is T, then Ti = (ti/t)*(T - t) + ti and Ti' = (ti'/t)*(T - t) + ti'.
8. The panorama multi-channel audio control method based on a variable domain acoustic image effect according to claim 1, characterized in that the method further comprises:
adding an audio sub-track;
editing the attributes of the audio sub-track, the attributes of the audio sub-track comprising audio effect processing parameters, the sound effect of the output channel corresponding to the audio track to which the audio sub-track belongs being adjustable by modifying the sound effect parameters of the audio sub-track.
9. The panorama multi-channel audio control method based on a variable domain acoustic image effect according to claim 8, characterized in that:
the types of the audio sub-track comprise a volume and gain sub-track and an EQ sub-track; each audio track can be provided with one volume and gain sub-track and one or more EQ sub-tracks; the volume and gain sub-track is used for adjusting the signal level of the output channel corresponding to the audio track to which it belongs, and the EQ sub-track is used for performing EQ audio effect processing on the signal output by the output channel corresponding to the audio track to which it belongs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310754855.9A CN104754243B (en) | 2013-12-31 | 2013-12-31 | Panorama multi-channel audio control method based on the control of variable domain acoustic image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104754243A true CN104754243A (en) | 2015-07-01 |
CN104754243B CN104754243B (en) | 2018-03-09 |
Family
ID=53593284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310754855.9A Active CN104754243B (en) | 2013-12-31 | 2013-12-31 | Panorama multi-channel audio control method based on the control of variable domain acoustic image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104754243B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1561144A (en) * | 2004-03-12 | 2005-01-05 | 陈健俊 | 3D8-X stero amplifying system |
CN101916095A (en) * | 2010-07-27 | 2010-12-15 | 北京水晶石数字科技有限公司 | Rehearsal performance control method |
WO2013093175A1 (en) * | 2011-12-22 | 2013-06-27 | Nokia Corporation | A method, an apparatus and a computer program for determination of an audio track |
CN103338420A (en) * | 2013-05-29 | 2013-10-02 | 陈健俊 | Control method of panoramic space stereo sound |
CN203241818U (en) * | 2013-05-29 | 2013-10-16 | 黄博卿 | Stage light and audio integration system |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106937023A (en) * | 2015-12-31 | 2017-07-07 | 上海励丰创意展示有限公司 | Towards video display, the multi-specialized collaborative editing of stage and control method |
CN106937021A (en) * | 2015-12-31 | 2017-07-07 | 上海励丰创意展示有限公司 | Performance integrated control method based on many rail playback technologies of time shaft |
CN106937022A (en) * | 2015-12-31 | 2017-07-07 | 上海励丰创意展示有限公司 | Audio, video, light, mechanical multi-specialized collaborative editing and control method |
CN106937021B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | performance integrated control method based on time axis multi-track playback technology |
CN106937022B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | multi-professional collaborative editing and control method for audio, video, light and machinery |
CN106937023B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | multi-professional collaborative editing and control method for film, television and stage |
Also Published As
Publication number | Publication date |
---|---|
CN104754243B (en) | 2018-03-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104754178A (en) | Voice frequency control method | |
US9142259B2 (en) | Editing device, editing method, and program | |
CN106937022B (en) | multi-professional collaborative editing and control method for audio, video, light and machinery | |
US10541003B2 (en) | Performance content synchronization based on audio | |
CN104754186A (en) | Device control method | |
US9420394B2 (en) | Panning presets | |
US6674955B2 (en) | Editing device and editing method | |
CN104750058A (en) | Panoramic multichannel audio frequency control method | |
CN104750059A (en) | Light control method | |
CN106937021B (en) | performance integrated control method based on time axis multi-track playback technology | |
CN104754243A (en) | Panoramic multichannel audio frequency control method based on variable domain acoustic image control | |
CN104754242A (en) | Variable rail sound image processing-based panoramic multi-channel audio control method | |
CN104754244A (en) | Full-scene multi-channel audio control method based on variable domain acoustic image performing effect | |
CN104750051A (en) | Variable rail sound image control-based panoramic multi-channel audio control method | |
CN101164648A (en) | Robot theater | |
CN106937023B (en) | multi-professional collaborative editing and control method for film, television and stage | |
CN104751869B9 (en) | Panoramic multi-channel audio control method based on orbital transfer sound image | |
CN104754241A (en) | Panoramic multichannel audio frequency control method based on variable domain acoustic images | |
CN104750055A (en) | Variable rail sound image effect-based panoramic multi-channel audio control method | |
CN106937204B (en) | Panorama multichannel sound effect method for controlling trajectory | |
CN106937205B (en) | Complicated sound effect method for controlling trajectory towards video display, stage | |
CN104754451B (en) | Pinpoint acoustic image method for controlling trajectory | |
CN104754447B (en) | Based on the link sound effect control method for becoming rail acoustic image | |
CN104754449B (en) | Sound effect control method based on variable domain acoustic image | |
CN104754444A (en) | Variable range acoustic image trajectory control method |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | EXSB | Decision made by sipo to initiate substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | GR01 | Patent grant | |