CN104754243B - Panoramic multi-channel audio control method based on variable-domain sound image control - Google Patents
Panoramic multi-channel audio control method based on variable-domain sound image control
- Publication number
- CN104754243B (application CN201310754855.9A)
- Authority
- CN
- China
- Prior art keywords
- sound image
- audio
- track
- loudspeaker
- control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Signal Processing For Digital Recording And Reproducing (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
Abstract
The present invention relates to performance equipment control technology, and specifically to a panoramic multi-channel audio control method based on variable-domain sound image control. The method includes: displaying a time axis on the display interface of an integrated control console; adding and/or deleting tracks used to control the corresponding performance devices, the tracks including a lighting track; editing track attributes; adding materials; editing material attributes; and having the integrated control console issue the corresponding control instructions according to the attributes of each track and its materials. The present invention solves the technical problems of editing and synchronously controlling current performance programs and of controlling sound-image movement effects.
Description
Technical field
The present invention relates to performance equipment control technology, and specifically to a panoramic multi-channel audio control method based on variable-domain sound image control.
Background art
In the process of arranging a performance program, one of the more prominent problems is the coordination and synchronization between the different disciplines (audio, video, lighting, machinery and so on). In a large-scale performance the disciplines are relatively independent, so a rather large crew is needed to ensure that the show is arranged and performed smoothly. While each discipline arranges its own part of the program, most of the time is spent on coordination and synchronization between disciplines, often far more than the time actually devoted to the program itself.
Because the disciplines are relatively independent, their control methods differ greatly. For live audio-visual synchronized editing, video is controlled from the lighting console while audio is controlled by a multi-track playback editor. Audio can easily be located to an arbitrary time and played back from there, but video can only be played from the beginning by frame number (an operator can manually cue it to the corresponding position, but it cannot follow a timecode), which is not flexible enough for live show control.
In addition, in the existing professional loudspeaker systems of film, television and stage venues, the loudspeaker positions are fixed, and the sound image is essentially anchored at the center of the stage by the main left and right loudspeakers at the two sides of the stage or by the main left, center and right loudspeakers. Although a venue is usually fitted with a large number of loudspeakers at various positions besides the stage mains, the sound image of the loudspeaker system hardly ever changes during an entire show.
Therefore, the editing and synchronized control of current performance programs and the flexible control of sound-image movement effects are key technical problems that urgently need to be solved in this technical field.
Summary of the invention
The technical problem solved by the present invention is to provide a panoramic multi-channel audio control method based on variable-domain sound image control that can simplify multi-discipline control in settings such as film, television and stage performance and that allows the sound image of the sound reinforcement system to be set flexibly and quickly.
To solve the above technical problem, the technical solution adopted by the present invention is a panoramic multi-channel audio control method based on variable-domain sound image control, including:
adding audio tracks: adding, on the display interface, one or more audio tracks that are parallel to and aligned with the time axis, each audio track corresponding to one output channel;
editing audio track attributes;
adding audio materials: adding one or more audio materials to an audio track and generating in the audio track an audio material icon corresponding to each audio material, the length of audio track occupied by the icon matching the total duration of the audio material;
editing audio material attributes, the audio material attributes including start position, end position, start time, end time, total duration and playback length;
adding audio sub-tracks: adding one or more audio sub-tracks attached to one of the audio tracks, each audio sub-track being parallel to the time axis and corresponding to the output channel of its audio track, the audio sub-track types including the sound-image sub-track;
adding sound-image sub-tracks and sound-image materials: adding one or more sound-image materials to a sound-image sub-track and generating in that sub-track a sound-image material icon corresponding to each sound-image material, the length of sub-track occupied by the icon matching the total duration of the sound-image material;
editing sound-image sub-track attributes;
editing sound-image material attributes, the sound-image material attributes likewise including start position, end position, start time, end time, total duration and playback length.
The sound-image material is sound-image trajectory data, and the sound-image trajectory data includes variable-domain sound-image trajectory data and loudspeaker link data, wherein:
The variable-domain sound-image trajectory data is obtained as follows:
setting loudspeaker nodes: adding or deleting loudspeaker nodes in the loudspeaker distribution map;
modifying loudspeaker node attributes: the attributes of a loudspeaker node include loudspeaker coordinates, loudspeaker type, corresponding output channel and initialization level;
setting the sound-image travel path and dividing sound-image regions: setting several sound-image regions in the loudspeaker distribution map, each sound-image region containing several loudspeaker nodes, and setting a travel path that traverses each sound-image region;
editing the sound-image region time attributes, including the moment corresponding to each sound-image region, the time required from the current sound-image region to the next, and the total travel duration of the sound image;
recording the variable-domain sound-image trajectory data: recording the output level value of each loudspeaker node at each moment while the sound image traverses each sound-image region in turn along the set travel path.
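For illustration only, the sketch below shows one way such variable-domain trajectory data could be recorded as a per-node level envelope; the class and function names (SpeakerNode, Region, record_trajectory) and the sampling interval are assumptions, not part of the patent.

```python
# Minimal sketch (not the patent's implementation): recording variable-domain
# sound-image trajectory data as per-node level envelopes sampled over time.
from dataclasses import dataclass

@dataclass
class SpeakerNode:
    name: str
    xy: tuple             # coordinates in the distribution map
    channel: int          # corresponding output channel
    init_level_db: float  # initialization level

@dataclass
class Region:
    nodes: list           # loudspeaker nodes belonging to this sound-image region
    dwell_s: float        # time required to move from this region to the next

def record_trajectory(regions, all_nodes, tick_s=0.1):
    """Return {node.name: [level_dB per tick]} while the image visits each region in turn."""
    data = {n.name: [] for n in all_nodes}
    for region in regions:
        active = {n.name for n in region.nodes}
        steps = max(1, int(region.dwell_s / tick_s))
        for _ in range(steps):
            for n in all_nodes:
                # nodes inside the current region output their level,
                # all other nodes are pulled down to -inf (silence)
                data[n.name].append(n.init_level_db if n.name in active else float("-inf"))
    return data
```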
The loudspeaker link data is obtained as follows:
setting loudspeaker nodes: adding or deleting loudspeaker nodes in the loudspeaker distribution map;
modifying loudspeaker node attributes: the attributes of a loudspeaker node include loudspeaker coordinates, loudspeaker type, corresponding output channel and initialization level;
setting loudspeaker node link relationships: linking a selected subwoofer node to several nearby full-range loudspeaker nodes;
recording the loudspeaker link data: calculating and recording the output level DerivedTrim of the subwoofer as
DerivedTrim = 10*log(Ratio) + DeriveddB, with Ratio = Σ_i 10^((Trim_i + LinkTrim_i)/10),
where Trim_i is the output level value of full-range loudspeaker node i itself, LinkTrim_i is the link level previously set between full-range node i and the subwoofer, DeriveddB is the initialization level value of the subwoofer node, and DerivedTrim is the output level value of the subwoofer node after it has been linked to the full-range loudspeaker nodes.
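A minimal numerical sketch of the link-level formula above (interpreting log as log10, as is usual for decibel formulas); the function name and arguments are illustrative:

```python
# Sketch of: DerivedTrim = 10*log10(sum_i 10^((Trim_i + LinkTrim_i)/10)) + DeriveddB
import math

def derived_trim(full_range_trims_db, link_trims_db, derived_db):
    """full_range_trims_db[i]: output level of full-range node i (dB);
    link_trims_db[i]: link level set between node i and the subwoofer (dB);
    derived_db: initialization level of the subwoofer node (dB)."""
    ratio = sum(10 ** ((t + lt) / 10) for t, lt in zip(full_range_trims_db, link_trims_db))
    return 10 * math.log10(ratio) + derived_db

# e.g. a subwoofer linked to two full-range nodes at 0 dB with -6 dB link levels
print(derived_trim([0.0, 0.0], [-6.0, -6.0], 0.0))  # ≈ -2.99 dB
```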
Compared with the prior art, the beneficial effects are as follows. Playback master control embodies the idea of "integrated performance management". From a technical point of view the coupling between these units is very low: they can operate independently without affecting one another, and the only prominent connection between them is "time", i.e. what is playing at what moment. From the user's point of view, this "time" relationship is exactly what they care about most. If the states of these units can be viewed and managed in one place, the user is spared many unnecessary troubles, such as coordinating synchronization between the units, or the mutual referencing and comparative correction between disciplines during program editing.
Brief description of the drawings
Fig. 1 is a schematic diagram of the panoramic multi-channel audio control method based on variable-domain sound image control of the embodiment.
Fig. 2 is a schematic diagram of the audio control part of the method of the embodiment.
Fig. 3 is a schematic diagram of the audio sub-track operations of the method of the embodiment.
Fig. 4 is a schematic diagram of the video control part of the method of the embodiment.
Fig. 5 is a schematic diagram of the lighting control part of the method of the embodiment.
Fig. 6 is a schematic diagram of the device control part of the method of the embodiment.
Fig. 7 is a principle schematic of the integrated performance control system of the embodiment.
Fig. 8 is a principle schematic of the audio control module of the integrated performance control system of the embodiment.
Fig. 9 is a principle schematic of the video control module of the integrated performance control system of the embodiment.
Fig. 10 is a principle schematic of the lighting control module of the integrated performance control system of the embodiment.
Fig. 11 is a principle schematic of the device control module of the integrated performance control system of the embodiment.
Fig. 12 is a schematic diagram of the multi-track playback editing module interface of the method of the embodiment.
Fig. 13 is a principle schematic of the audio control part of the integrated performance control system of the embodiment.
Fig. 14 is a principle schematic of the track matrix module of the integrated performance control system of the embodiment.
Fig. 15 is a principle schematic of the video control part of the integrated performance control system of the embodiment.
Fig. 16 is a principle schematic of the lighting control part of the integrated performance control system of the embodiment.
Fig. 17 is a principle schematic of the device control part of the integrated performance control system of the embodiment.
Fig. 18 is a schematic diagram of the steps of the variable-track sound-image trajectory control method of the embodiment.
Fig. 19 is a schematic diagram of the steps of the variable-track sound-image trajectory data generation method of the embodiment.
Fig. 20 is the loudspeaker distribution map and variable-track sound-image trajectory schematic of the embodiment.
Fig. 21 is the triangular loudspeaker node schematic of the embodiment.
Fig. 22 is a schematic diagram of the steps of the variable-domain sound-image trajectory data generation method of the embodiment.
Fig. 23 is the loudspeaker distribution map and variable-domain sound-image trajectory schematic of the embodiment.
Fig. 24 is a schematic diagram of the steps of the fixed-point sound-image trajectory data generation method of the embodiment.
Fig. 25 is a schematic diagram of the steps of the loudspeaker link data generation method of the embodiment.
Fig. 26 is the loudspeaker link schematic of the embodiment.
Embodiment
All types of sound-image trajectory control according to the present invention are further described below with reference to the accompanying drawings.
This embodiment provides a panoramic multi-channel audio control method based on variable-domain sound image control that can simplify multi-discipline control in settings such as film, television and stage performance and that allows the sound image of the sound reinforcement system to be set flexibly and quickly. Through the multi-track playback editing module of the integrated control console, the method achieves centralized arrangement and control of the materials of several disciplines. As shown in Fig. 1, the panoramic multi-channel audio control method based on variable-domain sound image control comprises the following steps:
S101: displaying a time axis on the display interface of the integrated control console;
S102: adding and/or deleting tracks used to control the corresponding performance devices;
S103: editing track attributes;
S104: adding materials;
S105: editing material attributes;
S106: the integrated control console issues the corresponding control instructions according to the attributes of each track and its materials.
As shown in Fig. 2 and Fig. 12, the panoramic multi-channel audio control method based on variable-domain sound image control should include multi-track audio playback control (corresponding to the audio control module described below), which specifically includes the following steps:
S201: adding audio tracks: adding, on the display interface, one or more audio track regions 1 and 2 that are parallel to and aligned with the time axis, each audio track corresponding to one output channel.
S202: editing audio track attributes. The editable audio track attributes include track lock and track mute. The track mute attribute controls whether the audio materials on the track and all of its sub-tracks are muted; it is the master switch of the audio track. When the track lock attribute is set, apart from a few individual attributes such as mute and hiding the sub-tracks, the other track attributes and the positions and attributes of the materials in the audio track cannot be modified.
S203: adding audio materials: adding one or more audio materials 111, 112, 113, 211, 212, 213, 214 to audio tracks 1 and 2, and generating in the audio track an icon corresponding to each audio material, the length of audio track occupied by the icon matching the total duration of the audio material. Before adding an audio material, an audio material list is first obtained from the audio server, and the audio material is then selected from that list and added to the audio track. Once an audio material has been added to an audio track, an audio attribute file corresponding to that material is generated. The integrated control console controls the instructions sent to the audio server by editing the audio attribute file, rather than directly invoking or editing the source file of the audio material, which guarantees the safety of the source files and the stability of the integrated control console.
S204: editing audio material attributes, the audio material attributes including start position, end position, start time, end time, total duration and playback length. The start position is the moment on the time axis that corresponds (in the vertical direction) to the start edge of the audio material, and the end position is the moment on the time axis that corresponds (in the vertical direction) to its end edge. The start time is the moment on the time axis at which the audio material actually starts playing, and the end time is the moment on the time axis at which it actually stops playing. In general, the start time may be later than the start position and the end time may be earlier than the end position. The total duration is the original length of the audio material, i.e. the time difference between the start position and the end position; the playback length is the length of time the audio material actually plays on the time axis, i.e. the time difference between the start time and the end time. By adjusting the start time and end time, a trimming operation can be applied to the material, so that only the part the user wishes to play is played.
The start position and end position can be changed by moving the audio material horizontally within the audio track, but their relative distance on the time axis does not change, i.e. the length of the audio material does not change. Adjusting the start time and end time of the audio material changes its actual playback time and playback length on the time axis. Several audio materials can be placed in one audio track, meaning that within the period represented by the time axis they are played in sequence through the corresponding output channel. Note that the position (time location) of an audio material within an audio track can be adjusted freely, but the audio materials must not overlap one another.
Further, because the integrated control console only manipulates the attribute files corresponding to the audio materials, the console can also cut and splice audio materials. A cut operation splits one audio material on an audio track into several audio materials, each with its own attribute file, while the source file remains intact; the console then sends control commands based on these new attribute files to call the source file for the corresponding playback and audio operations in turn. Similarly, a splice operation merges two audio materials into one: their attribute files are merged into a single attribute file, and that one attribute file is sent to the audio server to control the invocation of the two audio source files.
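As a purely illustrative sketch of this attribute-file idea, the following shows an audio material described only by an attribute record, with a cut producing two records that both reference the same untouched source file; the field and function names are assumptions, not the patent's file format.

```python
from dataclasses import dataclass, replace

@dataclass
class AudioMaterialAttr:
    source: str        # source file on the audio server (never edited)
    start_pos: float   # timeline moment of the material's start edge (s)
    end_pos: float     # timeline moment of the material's end edge (s)
    start_time: float  # moment playback actually starts (>= start_pos)
    end_time: float    # moment playback actually ends (<= end_pos)

    @property
    def play_length(self):
        return self.end_time - self.start_time

def cut(attr, at):
    """Split one material at timeline moment `at` into two attribute records."""
    assert attr.start_time < at < attr.end_time
    left = replace(attr, end_pos=at, end_time=at)
    right = replace(attr, start_pos=at, start_time=at)
    return left, right

clip = AudioMaterialAttr("song01.wav", 0.0, 180.0, 5.0, 175.0)
a, b = cut(clip, 60.0)   # the source file song01.wav itself is untouched
```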
Further, several groups of physical control keys, each corresponding to one of the audio tracks, can be provided on the integrated control console so that the attributes of the audio materials can be adjusted manually via the physical keys, for example a material playback-time adjustment knob that nudges the position (time-axis position) of an audio material in its track forwards or backwards.
S205: adding audio sub-tracks 12, 13, 14, 15, 21, 22: adding one or more audio sub-tracks attached to one of the audio tracks, each audio sub-track being parallel to the time axis and corresponding to the output channel of its audio track.
Each audio track can have attached audio sub-tracks; the audio sub-track types include the sound-image sub-track and the sound-effect sub-track. The sound-image sub-track is used to apply sound-image trajectory processing to part or all of the audio materials of its audio track, and the sound-effect sub-track is used to apply effect processing to part or all of the audio materials of its audio track.
In this step, the following steps can further be performed:
S301: adding sound-image sub-tracks and sound-image materials: adding one or more sound-image materials 121, 122 to a sound-image sub-track and generating in that sub-track an icon corresponding to each sound-image material, the length of sound-image sub-track occupied by the icon matching the total duration corresponding to the sound-image material.
S302: editing sound-image sub-track attributes. As with the audio tracks, the editable sound-image sub-track attributes include track lock and track mute.
S303: editing sound-image material attributes. As with the audio materials, the sound-image material attributes include start position, end position, start time, end time, total duration and playback length.
Through the sound-image materials on a sound-image sub-track, sound-image trajectory processing can be applied, within the period between the material's start time and end time, to the signal output by the output channel of the audio track to which the sub-track belongs. Adding different types of sound-image material to a sound-image sub-track therefore applies different types of sound-image trajectory processing to the signal of the corresponding output channel, and by adjusting the start position, end position, start time and end time of each sound-image material, the moment at which the trajectory processing begins and the duration of the sound-image travel effect can be adjusted.
The difference between a sound-image material and an audio material is that an audio material represents audio data, whereas sound-image trajectory data is the output level of each loudspeaker node as it changes over time within a period of set length, arranged so that the sound image formed by the output levels of the virtual loudspeaker nodes in the loudspeaker distribution map travels along a preset path or remains stationary. In other words, sound-image trajectory data contains the output-level variation of all loudspeaker nodes in the distribution map over the period of set length. The types of sound-image trajectory data are fixed-point, variable-track and variable-domain sound-image trajectory data; the type of the trajectory data determines the type of the sound-image material, and the total travel duration corresponding to the trajectory data determines the time difference between the start position and end position of the sound-image material, i.e. the material's total duration. Sound-image trajectory processing means adjusting, according to the sound-image trajectory data, the actual output level of each loudspeaker entity corresponding to each loudspeaker node, so that the sound image of the physical loudspeaker system travels along the set path, or remains stationary, within the period of set length.
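The sketch below illustrates one possible shape for such trajectory data, a per-node level envelope sampled over the travel duration; the sampling interval and names are assumed for illustration only.

```python
# Minimal sketch (illustrative, not the patent's format) of sound-image
# trajectory data: one output-level envelope per loudspeaker node; the
# material's total duration equals the envelope length times the tick.
from typing import Dict, List

TICK_S = 0.1                                   # assumed sampling interval
SoundImageTrajectory = Dict[str, List[float]]  # node name -> output level (dB) per tick

def total_duration_s(traj: SoundImageTrajectory) -> float:
    return max(len(env) for env in traj.values()) * TICK_S

traj: SoundImageTrajectory = {
    "L": [0.0, -3.0, -10.0, float("-inf")],   # image leaves node L ...
    "R": [float("-inf"), -10.0, -3.0, 0.0],   # ... and arrives at node R
}
print(total_duration_s(traj))   # 0.4 s of travel, i.e. the material's total duration
```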
S304: adding sound-effect sub-tracks. The sound-effect sub-track types include the volume/gain sub-tracks 13, 22 and the EQ sub-tracks 14, 15, 21; each audio track may be given one volume/gain sub-track and one or more EQ sub-tracks. The volume/gain sub-track is used to adjust the signal level of the output channel corresponding to its audio track, and the EQ sub-track is used to apply EQ processing to the signal output by that channel.
S305: editing the sound-effect sub-track attributes. Besides track lock, track mute and the track identifier, the attributes of a sound-effect sub-track include the effect-processing parameters corresponding to its type: for example, the volume/gain sub-track carries an output-level adjustment parameter, and the EQ sub-track carries EQ processing parameters. Modifying the effect parameters of a sound-effect sub-track adjusts the effect applied to the output channel corresponding to the audio track to which the sub-track belongs.
S206: saving the data, or generating, according to the attributes of the audio tracks and their sub-tracks and of the audio materials and sound-image materials, the control instructions for the source files corresponding to the audio materials, and applying playback control and sound-image and effect processing control to the source files of the audio materials according to those instructions.
The control instructions include whether to call (play) the audio source file of an audio material, the start and end times of source-file playback (defined by time-axis moments), and the sound-image and effect processing of the source file; the specific control instructions correspond to the attributes of each audio track and its attached sub-tracks, of the audio materials and of the sound-image materials. In other words, the audio track does not call or process the source file of an audio material directly; it only handles the attribute file corresponding to the audio source file, and indirect control of the audio source file is achieved by editing and adjusting that attribute file, by adding/editing sound-image materials, and through the attributes of the audio track and its sub-tracks.
For example, an audio material added to an audio track enters the playlist, and when that audio track starts playing the material will be played. By editing the audio track attributes, the mute attribute controls whether the audio track and its attached sub-tracks are muted (effective); by editing the lock attribute, all attributes of the track other than a few individual ones such as mute and hiding the sub-tracks, together with the positions and attributes of the materials in the track, become unmodifiable (locked state). For more detail refer to the description above.
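As a hedged illustration of this indirect control, the sketch below builds a playback instruction from an attribute record rather than from the source file; the message format is invented, not the patent's protocol.

```python
import json

def build_play_command(attr: dict, channel: int) -> str:
    return json.dumps({
        "cmd": "play",
        "source": attr["source"],        # file the audio server will open
        "channel": channel,              # output channel of the audio track
        "start_at": attr["start_time"],  # time-axis moment to start playback
        "stop_at": attr["end_time"],     # time-axis moment to stop playback
    })

attr = {"source": "song01.wav", "start_time": 5.0, "end_time": 175.0}
print(build_play_command(attr, channel=1))
```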
As shown in Fig. 4 and Fig. 12, the panoramic multi-channel audio control method based on variable-domain sound image control of this embodiment may also optionally add video playback control (corresponding to the video control module described below), which specifically includes the following steps:
S401: adding a video track: adding, on the display interface, a video track region 4 that is parallel to and aligned with the time axis; the video track corresponds to one controlled device, in the present invention a video server.
S402: editing video track attributes. The editable video track attributes include track lock and track mute; the video track attributes are similar to the audio track attributes.
S403: adding video materials: adding one or more video materials 41, 42, 43, 44 to the video track and generating in the video track an icon corresponding to each video material, the length of video track occupied by the icon matching the total duration of the video material. Before adding a video material, a video material list is first obtained from the video server, and the video material is then selected from that list and added to the video track. Once a video material has been added to the video track, a video attribute file corresponding to that material is generated; the integrated control console controls the instructions sent to the video server by editing the video attribute file rather than directly invoking or editing the source file of the video material, which guarantees the safety of the source files and the stability of the integrated control console.
S404: editing video material attributes, the video material attributes including start position, end position, start time, end time, total duration and playback length. The video material attributes are similar to the audio material attributes; video materials can likewise be moved horizontally, cut and spliced, and a group of physical keys corresponding to the video track can be added on the integrated control console so that the video material attributes can be adjusted manually via the physical keys.
S405: saving the data, or generating, according to the video track attributes and the video material attributes, the control instructions for the source files corresponding to the video materials, and applying playback control to those source files according to the instructions. As with the audio tracks, the specific control instructions correspond to the attributes of the video track and of the video materials.
As shown in Fig. 5 and Fig. 12, the panoramic multi-channel audio control method based on variable-domain sound image control of this embodiment may also optionally add lighting control (corresponding to the lighting control module described below), which specifically includes the following steps:
S501: adding a lighting track: adding, on the display interface, a lighting track region 3 that is parallel to and aligned with the time axis; the lighting track corresponds to one controlled device, in the present invention a lighting network signal adapter (for example an Art-Net network card).
S502: editing lighting track attributes. The editable lighting track attributes include track lock and track mute; the lighting track attributes are similar to the audio track attributes.
S503: adding lighting materials: adding one or more lighting materials 31, 32, 33 to the lighting track and generating in the lighting track an icon corresponding to each lighting material, the length of lighting track occupied by the icon matching the total duration of the lighting material. As with the audio and video materials, the lighting track does not load the lighting material itself; instead an attribute file corresponding to the lighting material source file is generated, and control instructions are sent via the attribute file to control the output of the lighting material source file.
A lighting material is lighting network control data of a certain duration, for example Art-Net data, in which DMX data is encapsulated. A lighting material can be generated in the following way: after the lighting program has been arranged on a conventional lighting console, the integrated control console records, through the lighting network interface connecting it to the lighting network interface of the conventional lighting console, the lighting control signal output by the lighting console; during recording the integrated control console stamps a timecode onto the recorded lighting control signal so that it can be edited and controlled on the lighting track.
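A minimal sketch of such a recording step is shown below; the standard Art-Net UDP port 6454 is assumed, and the frame handling is simplified for illustration.

```python
import socket, time

def record_lighting(duration_s: float, port: int = 6454):
    """Capture raw Art-Net/DMX frames and pair each with a timecode offset."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(0.5)
    t0, frames = time.monotonic(), []
    while time.monotonic() - t0 < duration_s:
        try:
            data, _ = sock.recvfrom(2048)
        except socket.timeout:
            continue
        frames.append((time.monotonic() - t0, data))  # (timecode offset, raw frame)
    sock.close()
    return frames
```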
S504: editing lighting material attributes, the lighting material attributes including start position, end position, start time, end time, total duration and playback length. The lighting material attributes are similar to the audio material attributes; lighting materials can likewise be moved horizontally, cut and spliced, and a group of physical keys corresponding to the lighting track can be added on the integrated control console so that the lighting material attributes can be adjusted manually via the physical keys.
S505: saving the data, or generating, according to the lighting track attributes and the lighting material attributes, the control instructions for the source files corresponding to the lighting materials, and applying output control to those source files according to the instructions. As with the audio tracks, the specific control instructions correspond to the attributes of the lighting track and of the lighting materials.
As shown in Fig. 6 and Fig. 12, the panoramic multi-channel audio control method based on variable-domain sound image control of this embodiment may also optionally add device control (corresponding to the device control module described below), which specifically includes the following steps:
S601: adding device tracks: adding, on the display interface, one or more device track regions 5 parallel to the time axis, each device track corresponding to one controlled device, for example a mechanical device. Before adding a device track it must be confirmed that the controlled device has established a connection with the integrated control console. The integrated control console and the controlled devices can connect over TCP, for example with the console configured as the TCP server and each controlled device configured as a TCP client that actively connects to the console's TCP server after joining the network.
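The following sketch illustrates that connection scheme under assumed port and framing choices; it is not the patent's implementation.

```python
import socket, threading

def serve_devices(host: str = "0.0.0.0", port: int = 9000):
    """Console side: listen as a TCP server; each controlled device connects as a client."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen()
    devices = {}                       # address -> connected socket
    def accept_loop():
        while True:
            conn, addr = server.accept()
            devices[addr] = conn       # device registers itself by connecting
    threading.Thread(target=accept_loop, daemon=True).start()
    return devices                     # console sends control instructions on these sockets
```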
S602: editing device track attributes. The editable device track attributes include track lock and track mute and are similar to the audio track attributes; if a device track is muted, none of its attached control sub-tracks performs any operation.
S603: adding control sub-tracks: adding one or more control sub-tracks attached to one of the device tracks, each control sub-track being parallel to the time axis and corresponding to the controlled device of its device track.
S604: adding control materials: adding a control material of the corresponding type according to the type of the control sub-track and generating the corresponding control material on the sub-track, the length of control sub-track occupied by the control material matching the total duration of the control material.
The control sub-track types include the TTL control sub-track, the relay control sub-track and the network control sub-track. Correspondingly, the control materials that can be added to a TTL control sub-track include the TTL materials 511, 512, 513 (e.g. TTL high-level and TTL low-level control materials); the control materials that can be added to a relay sub-track include the relay materials 521, 522, 523, 524 (e.g. relay-open and relay-close control materials); and the control materials that can be added to a network control sub-track include the network materials 501, 502, 503 (e.g. TCP/IP, UDP, RS-232 and RS-485 protocol communication control materials). By adding the corresponding control material, the corresponding control instruction can be sent; a control material is essentially a control instruction.
S605: editing control material attributes, the attributes including start position, end position and total duration. The position of a control material within its control sub-track can be changed by moving it horizontally to alter its start and end positions, but their relative distance on the time axis does not change, i.e. the length of the control material does not change. The start position of a control material is the time-axis moment at which the control instruction corresponding to that material begins to be sent to the corresponding controlled device, and the end position is the time-axis moment at which sending of the control instruction ends.
Further, association relationships can be set between the control materials in the same control sub-track: if the control command corresponding to a material whose start position maps to an earlier time-axis moment has not executed successfully, the control instruction corresponding to an associated material whose start position maps to a later time-axis moment will not be sent (by the integrated control console) or will not be executed (by the controlled device), for example in the opening/closing and raising/lowering control of a stage curtain.
Further, a guard time of a certain length can be set before and after a control material on a control sub-track, i.e. within the guard time the control sub-track cannot accept another control material or cannot send a control command.
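A small illustrative sketch of these two safeguards (association dependency and guard time) follows; all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlMaterial:
    start_pos: float                    # time-axis moment to start sending
    end_pos: float                      # time-axis moment to stop sending
    payload: bytes                      # the control instruction itself
    depends_on: Optional["ControlMaterial"] = None
    succeeded: bool = False

def can_place(track: list, new: ControlMaterial, guard_s: float) -> bool:
    """Reject a material that falls inside another material's guard time."""
    return all(new.start_pos >= m.end_pos + guard_s or
               new.end_pos <= m.start_pos - guard_s for m in track)

def should_send(m: ControlMaterial) -> bool:
    """Only send if the earlier associated material has executed successfully."""
    return m.depends_on is None or m.depends_on.succeeded
```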
S606: saving the data, or generating the control instructions according to the attributes of the device tracks and their control sub-tracks and of the control materials, and sending the control instructions to the corresponding controlled devices.
In addition, this embodiment also provides an integrated performance control system. As shown in Fig. 7, the system includes an integrated control console 70 and optionally includes an audio server 76, a video server 77, a lighting control module 78 and a device control module 79. The integrated control console 70 includes a multi-track editing playback module 71, which can perform one or more of the audio control, video control, lighting control and device control of the integrated performance control method described above; the specific implementation steps are not repeated here. The multi-track playback editing control module includes an audio control module 72 and optionally includes a video control module 73, a lighting control module 74 and a device control module 75.
As shown in Fig. 8, the audio control module 72 includes an audio track adding module 81, an audio track attribute editing module 82, an audio material adding module 83, an audio material attribute editing module 84, an audio sub-track adding module 85 and a save-data/output-audio-control-instruction module 86; the functions implemented by these modules correspond one-to-one to steps S201 to S206 above and are not repeated here, and likewise below.
Further, the audio playback control principle of the integrated performance control system is shown in Fig. 13. The integrated control system also includes a quick playback editing module, a physical input module and the multi-track playback editing module. The quick playback editing module is used to edit audio materials in real time and to send the corresponding control instructions to the audio server 76 to play the source files corresponding to the audio materials; the physical input module corresponds to the physical control keys on the integrated control console 70 and is used to tune, in real time, the audio sources fed into the console from outside.
Correspondingly, the audio server is provided with a mixing matrix module, a track matrix module, a 3x1 output mixing module and a physical output module. The mixing matrix module can receive the audio signals produced by the audio source files in the audio server that are called, in the form of control commands, by the quick playback editing module and the multi-track playback editing module, as well as the audio signals output by the physical input module; the track matrix module can likewise receive each of these audio inputs. The mixing matrix is used to mix its audio inputs and output them to the output mixing module, while the track matrix module is used to apply sound-image trajectory processing to its audio inputs and output them to the output mixing module. The output mixing module receives the audio outputs of the mixing matrix module, the track matrix module and the physical input module and, after 3x1 mixing, sends them out through the physical output interfaces of the physical output module. Here, sound-image trajectory processing means adjusting the level output to each loudspeaker entity according to the sound-image trajectory data, so that the sound image of the physical loudspeaker system travels along the set path, or remains stationary, within the period of set length.
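For illustration, a simplified sketch of this signal flow is given below; the gain handling and function names are assumptions, not the audio server's actual implementation.

```python
# Mixing matrix sums its inputs per channel, track matrix weights its inputs
# with the per-node trajectory levels, and the 3x1 output mix combines both
# with the physical input before the physical outputs.
def db_to_gain(db):
    return 0.0 if db == float("-inf") else 10 ** (db / 20)

def mixing_matrix(inputs):                      # inputs: list of per-channel frames
    return [sum(ch) for ch in zip(*inputs)]

def track_matrix(inputs, node_levels_db):       # node_levels_db: trajectory level per channel
    mixed = mixing_matrix(inputs)
    return [s * db_to_gain(db) for s, db in zip(mixed, node_levels_db)]

def output_mix(a, b, c):                        # the 3x1 output mixing stage
    return [x + y + z for x, y, z in zip(a, b, c)]
```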
In this embodiment, the source files of the audio materials are stored on the audio server outside the integrated control console. The multi-track playback editing module never directly calls or processes the source file of an audio material; it only handles the attribute file corresponding to the audio source file, and achieves indirect control of the source file by editing and adjusting that attribute file, by adding/editing sound-image materials, and through the attributes of the audio track and its sub-tracks. The output channel of each audio track therefore outputs only control signals/instructions for the audio source file, and the audio server that receives the control instructions performs the actual processing of the audio source file.
As shown in Fig. 14, the multi-track playback editing module receives the list of available audio materials from the audio server 76 and does not process the audio source files directly. The audio source files are stored in the audio server, which, after receiving the corresponding control commands, calls the source files and performs the various effect processes, e.g. mixing in the mixing matrix module and trajectory processing in the track matrix module. A sound-image material is in fact also a control command; it can be stored on the integrated control console 70 or uploaded to the audio server.
As shown in Fig. 9, the video control module 73 includes a video track adding module 91, a video track attribute editing module 92, a video material adding module 93, a video material attribute editing module 94 and a save-data/output-video-control-instruction module 95; the functions implemented by these modules correspond one-to-one to steps S401 to S405 above.
Further, the video editing and playback control principle of the integrated performance control system is shown in Fig. 15. The integrated control console does not execute the source files of the video materials directly; instead it obtains the video material list and, via the corresponding attribute files, sends control instructions to the video server, which then executes the source files of the video materials for playback and effect operations according to those instructions.
As shown in Fig. 10, the lighting control module 74 includes a lighting track adding module 110, a lighting track attribute editing module 120, a lighting material adding module 130, a lighting material attribute editing module 140 and a save-data/output-lighting-control-instruction module 150; the functions implemented by these modules correspond one-to-one to steps S501 to S505 above.
Further, the lighting control principle of the integrated performance control system is shown in Fig. 16. The integrated control console is also provided with a lighting signal recording module for recording the lighting control signal output by the lighting console and stamping a timecode on the recorded lighting control signal during recording, so that it can be edited and controlled on the lighting track.
As shown in Fig. 11, the device control module 75 includes a device track adding module 151, a device track attribute editing module 152, a control sub-track adding module 153, a control material adding module 154, a control material attribute editing module 155 and a save-data/output-device-control-instruction module 156; the functions implemented by these modules correspond one-to-one to steps S601 to S606 above.
Further, the device control principle of the integrated performance control system is shown in Fig. 17. The various device control signals output by the integrated control console are sent to the corresponding controlled devices through the protocol interfaces on the device adapter.
In addition, the integrated control console may also include a sound-image trajectory data generation module for producing (generating) sound-image trajectory data (i.e. sound-image materials). The trajectory data obtained through this module can be called by the multi-track playback editing module, which thereby controls the track matrix module of the audio server to carry out sound-image trajectory control. Further, this embodiment provides a variable-track sound-image trajectory control method in which a control host (e.g. the integrated control console or the audio server) configures the output level value of each loudspeaker node of the physical loudspeaker system so that the sound image moves in a set manner, or remains stationary, within a set total duration. As shown in Fig. 18, the control method includes:
generating the sound-image trajectory data;
within the total duration corresponding to the sound-image trajectory data, adjusting the output level of each loudspeaker entity according to the sound-image trajectory data;
within the total duration, superposing the input level of the signal fed to each loudspeaker entity and the output level of the corresponding loudspeaker entity to obtain the actual output level of each loudspeaker entity.
Sound-image trajectory data means the output level of each loudspeaker node as it changes over time within a period of set length (i.e. the total duration of the sound image), arranged so that the sound image formed by the output levels of the virtual loudspeaker nodes in the virtual loudspeaker distribution map on the integrated control console travels along a preset path or remains stationary. In other words, the sound-image trajectory data contains the output-level variation of all loudspeaker nodes in the distribution map over the period of set length. For each loudspeaker node, the output level varies over the set period; it may also be zero, negative or even negative infinity, with negative infinity preferred.
Each loudspeaker node corresponds to a loudspeaker entity in the physical loudspeaker system, and each loudspeaker entity consists of one or more loudspeakers at the same location; that is, each loudspeaker node may correspond to one or more co-located loudspeakers. For the physical loudspeaker system to reproduce the sound-image path accurately, the position distribution of the virtual loudspeaker nodes in the loudspeaker distribution map should correspond to the position distribution of the loudspeaker entities in the physical loudspeaker system, in particular so that the relative positions between the loudspeaker nodes correspond to the relative positions between the loudspeaker entities.
The actual output level of a loudspeaker entity is the superposition of the level of the input signal and the output level, in the sound-image trajectory data, of the loudspeaker node corresponding to that entity. The former is a characteristic of the input signal, while the latter can be regarded as a characteristic of the loudspeaker entity itself. At any moment, different input signals have different input levels, whereas one loudspeaker entity has only one output level. Sound-image trajectory processing can therefore be understood as processing of the output level of each loudspeaker entity so as to form the preset sound-image path effect (including a stationary sound image).
The superposition of the input level and the output level of a loudspeaker entity may be performed before the audio signal actually enters the loudspeaker entity, or only after it has entered it; this depends on how the links of the whole sound reinforcement system are composed and on whether the loudspeaker entity has a built-in audio signal processing module such as a DSP unit.
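A brief sketch of this superposition, assuming levels in dB and a simple pre-cabinet gain stage:

```python
# The entity's actual output level is the input level plus the node's
# trajectory level (in dB); -inf mutes the entity while the image is elsewhere.
# Whether the gain is applied ahead of the cabinet or in a cabinet's built-in
# DSP is a system-design choice, as noted in the text.
def db_to_gain(db: float) -> float:
    return 0.0 if db == float("-inf") else 10 ** (db / 20)

def actual_output_level(input_level_db: float, node_level_db: float) -> float:
    if float("-inf") in (input_level_db, node_level_db):
        return float("-inf")
    return input_level_db + node_level_db

samples = [0.5, -0.25, 0.1]
attenuated = [s * db_to_gain(-6.0) for s in samples]   # applying the node level pre-cabinet
print(actual_output_level(-12.0, -6.0))                # -18.0 dB
```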
The types of sound-image trajectory data are: fixed-point sound-image data, variable-track sound-image trajectory data and variable-domain sound-image trajectory data. When sound-image trajectory data is generated by simulation on the integrated control console, the speed and progress of the sound image are, for convenience of control, represented in this embodiment by line segments connecting in sequence a number of discretely distributed sound-image trajectory control points in the loudspeaker distribution map; that is, the travel path of the sound image and its overall travel time are determined by several discretely distributed sound-image trajectory control points.
A fixed-point sound image means that, within a period of set length, one or more selected loudspeaker nodes in the loudspeaker distribution map output level continuously while the output level value of the unselected nodes is zero or negative infinity. Correspondingly, fixed-point sound-image data is the output level of each loudspeaker node over time when, within a period of set length, the selected loudspeaker nodes output level continuously and the unselected nodes output no level, or output a level of zero or negative infinity. For a selected node, the output level is continuous over the set period (it may also fluctuate up and down); for an unselected node, the output level remains at negative infinity throughout the set period.
A variable-track sound image means that, within a period of set length, each loudspeaker node outputs level according to a certain rule so that the sound image travels along a preset path. Correspondingly, variable-track sound-image trajectory data is the output level of each loudspeaker node over time that makes the sound image travel along the preset path within the period of set length. The travel path of the sound image need not be exactly accurate, and the movement (travel) time will not be very long; it is only necessary to create, broadly, a travel effect that the audience can recognize.
A variable-domain sound image means that, within a period of set length, the output level of each loudspeaker node changes according to a certain rule so that the sound image travels across preset regions. Correspondingly, variable-domain sound-image trajectory data is the output level of each loudspeaker node over time that makes the sound image travel across the preset regions within the time of set length.
As shown in Fig. 19, the variable-track sound-image trajectory data can be obtained as follows:
Setting loudspeaker nodes: adding or deleting loudspeaker nodes 11 in the loudspeaker distribution map 10, see Fig. 20.
Modifying loudspeaker node attributes: the attributes of a loudspeaker node include loudspeaker coordinates, loudspeaker type, corresponding output channel, initialization level, loudspeaker name and so on. A loudspeaker node is represented in the loudspeaker distribution map by a loudspeaker icon, and its coordinate position can be changed by moving the icon. The loudspeaker type distinguishes full-range loudspeakers from subwoofers; the specific types can be divided according to actual needs. Every loudspeaker node in the distribution map is assigned an output channel, every output channel corresponds to a loudspeaker entity in the physical loudspeaker system, and each loudspeaker entity consists of one or more co-located loudspeakers; that is, each loudspeaker node may correspond to one or more co-located loudspeakers. For the sound-image travel path designed in the distribution map to be reproduced, the position distribution of the loudspeaker entities should correspond to the position distribution of the loudspeaker nodes in the distribution map.
Dividing triangular regions: as shown in Fig. 20, according to the distribution of the loudspeaker nodes, the loudspeaker distribution map is divided into several triangular regions whose three vertices are loudspeaker nodes; the triangular regions do not overlap, no triangular region contains any other loudspeaker node, and each loudspeaker node corresponds to one output channel (or audio playback device);
Further, auxiliary loudspeaker nodes can be set to help define the triangular regions; an auxiliary loudspeaker node has no corresponding output channel and outputs no level;
Setting sound-image trajectory control points and the travel path: setting, in the loudspeaker distribution map, the travel path along which the sound image moves over time, together with several sound-image trajectory control points located on that path. The travel path and the trajectory control points can be set in the following ways (see the sketch after this list):
1. Point-by-point construction: determining in sequence the (coordinate) positions of several sound-image trajectory control points in the loudspeaker distribution map and connecting them in order to form the travel path. The first trajectory control point determined corresponds to moment zero, and each subsequent trajectory control point corresponds to the time elapsed between determining the first control point and determining the current one. For example, the trajectory control points can be clicked in the loudspeaker distribution map with a marker (e.g. the mouse pointer); the time elapsed between clicking one control point and clicking the next determines the time span between the two trajectory points, and the moment corresponding to each trajectory point is finally calculated from these spans;
2. Drag generation: dragging the marker (e.g. the mouse pointer) along an arbitrary straight line, curve or polyline in the loudspeaker distribution map to determine the travel path of the sound image; while the marker is being dragged, starting from the initial position, a sound-image trajectory control point is generated on the travel path every interval Ts. In this embodiment Ts is 108 ms.
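The sketch below illustrates the drag-generation sampling under assumed pointer-sampling parameters; only the 108 ms interval comes from the embodiment.

```python
TS_S = 0.108   # sampling interval Ts from the embodiment

def sample_drag(pointer_positions, dt_s):
    """pointer_positions: pointer (x, y) sampled every dt_s seconds during the drag.
    Returns [(x, y, moment_s), ...] with one control point roughly every TS_S seconds."""
    points, next_t = [], 0.0
    for i, (x, y) in enumerate(pointer_positions):
        t = i * dt_s
        if t >= next_t:
            points.append((x, y, t))
            next_t += TS_S
    return points

path = sample_drag([(i, 0.0) for i in range(50)], dt_s=0.02)  # a straight drag
print(len(path), path[:3])
```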
Editing the sound-image trajectory control point attributes: the attributes of a trajectory control point include its coordinate position, its corresponding moment, and the time required to reach the next trajectory control point. One or more of the moment corresponding to a selected control point, the time required from the selected control point to the next, and the total duration corresponding to the travel path can be modified.
Suppose trajectory control point i corresponds to moment ti, the time originally required for the sound image to travel from control point i to the next trajectory point i+1 is ti', and the total duration corresponding to the travel path is t. This means that the sound image needs time ti to travel from the initial position to trajectory control point i, and time t to travel through the whole path.
If the moment corresponding to a certain trajectory control point is modified, the moments corresponding to all the trajectory control points before it, and the total duration of the travel path, must all be adjusted. If control point i originally corresponds to moment ti and corresponds to moment Ti after modification, any control point j before control point i originally corresponds to moment tj and corresponds to moment Tj after adjustment, and the original total duration of the travel path is t and the modified total duration is T, then Tj = tj + tj/ti*(Ti - ti) and T = t + (Ti - ti). The adjustment method adopted by the present invention is simple and requires very little computation.
It will be understood that, after the moment of any control point is modified, the increased (or decreased) time can be distributed, in the same duration ratio, to all control points before that control point (i.e. the manner described above), or it can be distributed, in proportion to duration, to all control points on the running path. With the latter manner, suppose the time to be added at control point i is ki; then the moment of each control point is modified to Ti = (ki*ti/t) + ti. In other words, the time ki is not assigned to one control point alone: every control point on the path receives a share of it in proportion to its moment relative to the total duration of the running path.
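The two distribution modes can be sketched as follows; this assumes the control-point moments are kept in a plain list, and the uniform shift applied to points after the modified one is an added assumption chosen to match the stated change in total duration:

```python
def rescale_earlier_points(moments, i, new_ti):
    """Mode 1: point i's moment changes from t_i to T_i (t_i > 0 assumed); every
    earlier point is scaled by the same duration ratio T_i/t_i, and later points
    shift by (T_i - t_i) so the total duration becomes t + (T_i - t_i)."""
    ti = moments[i]
    shift = new_ti - ti
    return [m * new_ti / ti if j <= i else m + shift for j, m in enumerate(moments)]


def distribute_over_all_points(moments, total, ki):
    """Mode 2: the added time k_i is shared by every control point on the path in
    proportion to its moment: T_j = t_j + k_i * t_j / t; total becomes t + k_i."""
    return [m + ki * m / total for m in moments], total + ki


if __name__ == "__main__":
    moments = [0.0, 2.0, 4.0, 8.0]                    # t_0 .. t_3, total duration 8 s
    print(rescale_earlier_points(moments, 2, 6.0))    # point 2 moved from 4 s to 6 s
    print(distribute_over_all_points(moments, 8.0, 2.0))
```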
If the time required from a certain control point to the next control point is adjusted, the moment of the next control point and the total duration of the running path must be adjusted accordingly. Suppose control point i originally corresponds to moment ti and to moment Ti after modification, the time originally required for the acoustic image to run from control point i to the next trajectory point i+1 is ti' and is Ti' after the change, the original total duration of the running path is t, and the modified total duration is T; then Ti+1 = Ti + Ti' and T = t + (Ti − ti) + (Ti' − ti').
If the total duration of the running path is modified, the moment of every control point on that path, and the time from each control point to the next, are all adjusted. Suppose control point i originally corresponds to moment ti and to moment Ti after adjustment, the time originally required for the acoustic image to run from control point i to the next trajectory point i+1 is ti' and is Ti' after adjustment, the original total duration of the running path is t, and the modified total duration is T; then Ti = ti/t*(T − t) + ti and Ti' = ti'/t*(T − t) + ti'.
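A hedged sketch of the two remaining adjustments (segment time and total duration); as above, shifting every point after the edited segment by the same amount is an assumption consistent with the stated total-duration change, and the function names are illustrative:

```python
def adjust_segment_time(moments, seg_times, i, new_seg):
    """The time from control point i to i+1 changes from t_i' to T_i'.
    Point i+1 becomes T_{i+1} = T_i + T_i'; here the same shift is applied to
    every point after i, so the total duration grows by (T_i' - t_i')."""
    delta = new_seg - seg_times[i]
    seg_times = seg_times[:i] + [new_seg] + seg_times[i + 1:]
    moments = [m if j <= i else m + delta for j, m in enumerate(moments)]
    return moments, seg_times


def rescale_total_duration(moments, seg_times, old_total, new_total):
    """All moments and segment times are rescaled proportionally:
    T_i = t_i/t*(T - t) + t_i = t_i*T/t, and likewise T_i' = t_i'*T/t."""
    k = new_total / old_total
    return [m * k for m in moments], [s * k for s in seg_times]


if __name__ == "__main__":
    moments, segs = [0.0, 2.0, 4.0, 8.0], [2.0, 2.0, 4.0]
    print(adjust_segment_time(moments, segs, 1, 3.0))     # segment 1->2 lengthened by 1 s
    print(rescale_total_duration(moments, segs, 8.0, 12.0))
```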
Recording variable-track acoustic image track data: record the output level value of each audio amplifier node at each moment while the acoustic image runs along the set running path.
For a variable-track acoustic image, the output level values of the audio amplifier nodes that generate the acoustic image can be calculated as follows. As shown in figure 21, suppose acoustic image trajectory point i (not necessarily a trajectory control point) lies inside the triangular region enclosed by three audio amplifier nodes and corresponds to moment ti. At that moment the three audio amplifier nodes at the vertices output levels of a certain size, while the output level of every other audio amplifier node in the distribution map is zero or negative infinity, which ensures that the acoustic image at moment ti is located at trajectory point i. For the audio amplifier node A at any vertex of the triangular region, its output level at moment ti is dB_A1 = 10*lg(L_A'/L_A), where L_A' is the distance from the trajectory point to the straight line formed by the other two vertices of the triangular region, and L_A is the distance from node A to that same straight line.
Further, each audio amplifier node may also be given an initialization level value. Suppose the initialization level of node A is dB_A; then at moment ti the output level of node A is dB_A1' = dB_A + 10*lg(L_A'/L_A). The output levels at moment ti of the other audio amplifier nodes with initialization levels follow by analogy.
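The triangle-based level rule can be sketched as follows (a simplified illustration in Python; the helper names and the example coordinates are assumptions, and trajectory points lying exactly on an edge are simply muted for the opposite vertex):

```python
import math


def triangle_pan_levels(p, a, b, c, init_db=(0.0, 0.0, 0.0)):
    """Output levels of the three vertex speaker nodes for trajectory point p
    inside triangle (a, b, c): dB = init + 10*lg(L'/L), where L' is the distance
    from p to the line through the other two vertices and L is the distance from
    the vertex itself to that line."""
    def dist_to_line(pt, q1, q2):
        (x, y), (x1, y1), (x2, y2) = pt, q1, q2
        num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
        return num / math.hypot(x2 - x1, y2 - y1)

    verts = (a, b, c)
    levels = []
    for k, v in enumerate(verts):
        o1, o2 = verts[(k + 1) % 3], verts[(k + 2) % 3]
        l_img = dist_to_line(p, o1, o2)     # L': trajectory point to opposite edge
        l_spk = dist_to_line(v, o1, o2)     # L : this vertex to the same edge
        ratio_db = 10.0 * math.log10(l_img / l_spk) if l_img > 0 else float("-inf")
        levels.append(init_db[k] + ratio_db)
    return levels


if __name__ == "__main__":
    # At the centroid all three speakers receive the same (negative) level.
    print(triangle_pan_levels((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0)))
```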
Further, as shown in figure 20, if part of the acoustic image trajectory points (or of the running path, for example the end of the movement trajectory) does not fall inside any triangular region formed by three audio amplifier nodes, an auxiliary audio amplifier node 13 can be set to form a new triangular region, so that every trajectory point falls inside a corresponding triangular region. The auxiliary audio amplifier node has no corresponding output channel and outputs no level; it is only used to help determine the triangular region.
Further, the output level values of the audio amplifier nodes may be recorded continuously or at a certain frequency. The latter means that the output level value of each node is recorded once per fixed time interval. In the present embodiment, the output level of each audio amplifier node while the acoustic image runs along the set path is recorded at a frequency of 25 frames/second or 30 frames/second. Recording the output level data at a fixed frequency reduces the amount of data, speeds up the acoustic image trajectory processing applied to the input audio signal, and guarantees the real-time performance of the running effect.
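As an illustration of frame-rate recording rather than continuous recording (the callable `levels_at` and its return format are placeholders, not part of the patent):

```python
def sample_track_data(levels_at, total_duration, fps=25):
    """Record each audio amplifier node's output level once per frame instead of
    continuously; levels_at(t) is any callable returning the node levels at time t."""
    frame_count = int(total_duration * fps) + 1
    return [levels_at(i / fps) for i in range(frame_count)]


# e.g. 2 seconds at 25 fps -> 51 recorded frames
frames = sample_track_data(lambda t: {"node_A": -6.0, "node_B": -12.0}, 2.0, fps=25)
print(len(frames))
```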
As shown in figure 22, the variable domain acoustic image track data can be obtained as follows:
S501: set audio amplifier nodes: add or delete audio amplifier nodes in the audio amplifier distribution map.
S502: modify audio amplifier node attributes: the attributes of an audio amplifier node include its coordinates, audio amplifier type, corresponding output channel, initialization level, audio amplifier name, and so on. A node is represented in the distribution map by an audio amplifier icon, and its coordinate position is changed by moving the icon. The audio amplifier type distinguishes, for example, full-range cabinets from ultra-low-frequency cabinets, and further types can be defined as actually needed. Each audio amplifier node in the distribution map is assigned an output channel; each output channel corresponds to one audio amplifier entity in the physical sound box system, and each audio amplifier entity comprises one or more co-located cabinets. That is, each audio amplifier node may correspond to one or more co-located cabinets. In order to reproduce the running path designed in the distribution map, the position distribution of the audio amplifier entities should correspond to the position distribution of the audio amplifier nodes in the distribution map.
S503: set the running path and divide acoustic image regions: set multiple acoustic image regions in the audio amplifier distribution map, each containing several audio amplifier nodes, and set a running path that traverses every region. In other words, each region is treated as one "acoustic image point", and the acoustic image runs from one region to the next until it has traversed all regions. The regions may be set arbitrarily, each covering its own part of the distribution map, or they may be set quickly in the following way: set a straight running path in the distribution map and set several acoustic image regions along it, the border of each region being approximately perpendicular to the running direction. The regions may be arranged side by side or at intervals, but arranging them side by side is preferred in order to keep the acoustic image movement (running) continuous. The total area of the regions is less than or equal to the area of the whole distribution map. When dividing the regions, equal widths or unequal widths may be used.
In concrete operation, the running path can be set and the acoustic image regions divided at the same time by dragging a marker (such as the mouse pointer). Specifically: drag the marker in the distribution map from a starting position along some direction to an end position, and divide several acoustic image regions equally along the straight-line distance from the starting position to the end position; the border of each region is perpendicular to the straight line from the starting position to the end position, and all regions have equal width. The total running duration is the time taken to drag the marker from the starting position to the end position.
Suppose the straight-line distance of the marker from the starting position to the end position is R, the total time used is t, and the number of equally divided regions is n; then n regions of width R/n are generated automatically, and each acoustic image region corresponds to a time of t/n.
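The equal-width division can be sketched as follows (dictionary keys and the example figures are illustrative only):

```python
import math


def divide_regions_by_drag(start, end, drag_seconds, n):
    """Equal-width division of a straight running path: n regions of width R/n,
    each assigned a time of t/n, where R is the start-to-end distance and t the
    time taken by the drag."""
    (x0, y0), (x1, y1) = start, end
    r = math.hypot(x1 - x0, y1 - y0)
    width, dt = r / n, drag_seconds / n
    return [
        {"index": i + 1,
         "span": (i * width, (i + 1) * width),   # extent along the drag line
         "moment": (i + 1) * dt}                 # time assigned to this region
        for i in range(n)
    ]


print(divide_regions_by_drag((0.0, 0.0), (12.0, 0.0), drag_seconds=8.0, n=4))
```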
S504: edit acoustic image region time attributes, including the moment corresponding to each region, the time required from the current region to the next region, and the total running duration. Editing the region attributes is similar to editing the variable-track trajectory control point attributes: if the moment of a certain region is modified, the moments of all preceding regions and the total running duration are adjusted; if the time from a certain region to the next region is adjusted, the moment of the next region and the total running duration are adjusted; if the total running duration is modified, the moment of every region on the path and the time from each region to the next are all adjusted.
S505: record the variable domain acoustic image track data: record the output level value of each audio amplifier node at each moment while the acoustic image runs through the acoustic image regions in turn along the set running path.
For a variable domain acoustic image, the output level values of the audio amplifier nodes that generate the acoustic image can be calculated as follows.
As shown in figure 23, suppose the total running duration of a certain variable domain track is t and the path is divided into 4 acoustic image regions of equal width; the acoustic image runs along a straight path from one acoustic image region 1 (region i) to the next acoustic image region 2 (region i+1); the midpoint of the path segment lying inside region 1 is acoustic image trajectory control point 1 (control point i), and the midpoint of the segment lying inside region 2 is acoustic image trajectory control point 2 (control point i+1). While acoustic image trajectory point P runs from region 1 to region 2, the output level of each audio amplifier node in region 1 is domain1 dB (dB_i), the output level of each audio amplifier node in region 2 is domain2 dB (dB_(i+1)), and the output level of every audio amplifier node outside these two regions is zero or negative infinity, with
η = l_P2/l_12 = (l_12 − l_1P)/l_12,  domain1 dB = 10*log_e(η)÷2.3025851
β = l_1P/l_12 = (l_12 − l_P2)/l_12,  domain2 dB = 10*log_e(β)÷2.3025851
where l_12 is the distance from acoustic image trajectory control point 1 to acoustic image trajectory control point 2, l_1P is the distance from control point 1 to trajectory point P, and l_P2 is the distance from trajectory point P to control point 2. It can be seen from these formulas that each trajectory point has two region output levels, except when the trajectory point coincides with a trajectory control point, where only one region outputs level: for example, when trajectory point P reaches control point 2, only region 2 outputs level and the output level of region 1 is zero.
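A short sketch of this cross-region fade; it implements only the two formulas above, with the distances passed in as plain numbers:

```python
import math


def region_crossfade(l_1p, l_12):
    """Levels of the two neighbouring acoustic image regions while trajectory
    point P travels from control point 1 towards control point 2:
    eta = l_P2/l_12, beta = l_1P/l_12, and each region's level is
    10*log_e(x)/2.3025851, i.e. 10*log10(x)."""
    l_p2 = l_12 - l_1p

    def to_db(x):
        return 10.0 * math.log(x) / 2.3025851 if x > 0 else float("-inf")

    return to_db(l_p2 / l_12), to_db(l_1p / l_12)   # (domain1 dB, domain2 dB)


# Midway both regions sit near -3 dB; at control point 2 region 1 is muted.
print(region_crossfade(l_1p=0.5, l_12=1.0))
print(region_crossfade(l_1p=1.0, l_12=1.0))
```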
When recording the output level values of the audio amplifier nodes in a variable domain acoustic image track, the recording may be continuous or at a certain frequency; the latter means that the output level of each node is recorded once per fixed time interval. In the present embodiment the output level of each audio amplifier node while the acoustic image runs along the set path is recorded at a frequency of 25 frames/second or 30 frames/second. Recording at a fixed frequency reduces the amount of data, speeds up the acoustic image trajectory processing applied to the input audio signal, and guarantees the real-time performance of the running effect.
As shown in figure 24, the fixed point acoustic image track data can be obtained as follows (a sketch follows the steps):
S701: set audio amplifier nodes: add or delete audio amplifier nodes in the audio amplifier distribution map.
S702: modify audio amplifier node attributes: the attributes of an audio amplifier node include its coordinates, audio amplifier type, corresponding output channel, initialization level, audio amplifier name, and so on.
S703: set acoustic image trajectory points and the total duration: select one or more audio amplifier nodes in the audio amplifier distribution map as acoustic image trajectory points, and set the dwell time of the acoustic image at each selected node.
S704: record the fixed point acoustic image track data: record the output level value of each audio amplifier node at each moment within the above total duration.
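A rough sketch of what the recorded fixed point track data might look like; treating the dwelling node as the only audible one, at an assumed 0 dB, is an interpretation rather than something the patent states:

```python
def fixed_point_track_data(dwell, fps=25):
    """Fixed point acoustic image track: the acoustic image dwells on the selected
    audio amplifier nodes one after another; per frame, only the dwelling node is
    recorded as audible (0 dB here, an assumed convention), all others muted."""
    frames = []
    for node, seconds in dwell:                    # e.g. [("node_3", 2.0), ("node_7", 1.5)]
        frames.extend({node: 0.0} for _ in range(round(seconds * fps)))
    return frames


track = fixed_point_track_data([("node_3", 2.0), ("node_7", 1.5)], fps=25)
print(len(track))    # 88 frames for 3.5 s at 25 fps
```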
In addition, the acoustic image track data of the present invention also includes audio amplifier link data. An audio amplifier link is an association operation performed on audio amplifier nodes: when the active node of the association outputs level, the passive node of the association outputs level automatically. The audio amplifier link data is the output level difference of the passive node relative to the active node after several selected nodes have been associated. Link association is appropriate for audio amplifier nodes that are relatively close to each other in spatial distribution.
As shown in figure 25, the audio amplifier link data can be obtained as follows:
S801: set audio amplifier nodes: add or delete audio amplifier nodes in the audio amplifier distribution map.
S802: modify audio amplifier node attributes: the attributes of an audio amplifier node include its coordinates, audio amplifier type, corresponding output channel, initialization level, audio amplifier name, and so on.
S803: set the audio amplifier node link relation: link the selected ultra-low-frequency audio amplifier node to several neighbouring full-range audio amplifier nodes.
S804: record the audio amplifier link data: calculate and record the output level DerivedTrim of the ultra-low-frequency audio amplifier,
DerivedTrim = 10*log(Ratio) + DeriveddB,  Ratio = Σ 10^((Trim_i + LinkTrim_i)/10),
where Trim_i is the output level value of any full-range node i itself, LinkTrim_i is the link level originally set between full-range node i and the ultra-low-frequency audio amplifier, DeriveddB is the initialization level value of the ultra-low-frequency node, and DerivedTrim is the output level value of the ultra-low-frequency node after it has been linked to the full-range nodes. One ultra-low-frequency node may be linked to one or more full-range nodes; after linking, whenever a full-range node outputs level, the ultra-low-frequency node linked to it automatically outputs level as well, so as to build a certain sound effect together with the full-range node. When one ultra-low-frequency node is linked to one full-range node, only the distance between the two, the nature of the sound source, the required sound effect and so on need to be considered, and the ultra-low-frequency node can be set to follow the full-range node automatically and output level while it plays, i.e. the link level.
As shown in figure 26, suppose the ultra-low-frequency audio amplifier node 24 in the distribution map is linked to 3 neighbouring full-range nodes whose own output level values are Trim1, Trim2 and Trim3, and the link levels originally set between node 24 and the full-range nodes are LinkTrim1, LinkTrim2 and LinkTrim3. If the overall level addition ratio is Ratio and the initialization level value of node 24 itself is DeriveddB, the final output level value DerivedTrim of node 24 is given by:
Ratio = 10^((Trim1 + LinkTrim1)/10) + 10^((Trim2 + LinkTrim2)/10) + 10^((Trim3 + LinkTrim3)/10)
DerivedTrim = 10*log(Ratio) + DeriveddB
When Ratio is greater than 1, the output level gained by linking node 24 to these three full-range nodes is taken as 0, i.e. its final output level value equals its initialization level value.
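A sketch of the link-level computation, including the stated rule that the link gain is taken as 0 when Ratio exceeds 1 (function name and example figures are illustrative):

```python
import math


def derived_trim(linked_nodes, derived_db):
    """Ratio = sum of 10**((Trim_i + LinkTrim_i)/10) over the linked full-range
    nodes; the level gained from the link is 10*log10(Ratio), taken as 0 when
    Ratio > 1, and DerivedTrim adds the subwoofer's initialization level."""
    ratio = sum(10 ** ((trim + link) / 10.0) for trim, link in linked_nodes)
    link_gain = 10.0 * math.log10(ratio) if ratio <= 1.0 else 0.0
    return link_gain + derived_db


# Ultra-low-frequency node 24 linked to three full-range nodes
print(derived_trim([(-6.0, -6.0), (-9.0, -6.0), (-12.0, -6.0)], derived_db=-3.0))
```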
Claims (9)
- 1. A panorama multi-channel audio control method based on variable domain acoustic image control, characterized by comprising:
adding audio tracks: one or more audio tracks that are parallel to and aligned with the time axis are added on the display interface, each audio track corresponding to an output channel;
editing audio track attributes;
adding audio material: one or more audio materials are added in an audio track, and an audio material icon corresponding to each audio material is generated in the audio track, the length of audio track occupied by the icon matching the total duration of the audio material;
editing audio material attributes, the audio material attributes including start position, end position, start time, end time, total duration and playback time length;
adding audio sub-tracks: one or more audio sub-tracks corresponding to one of the audio tracks are added, each audio sub-track being parallel to the time axis and corresponding to the output channel of its audio track, the types of audio sub-track including an acoustic image sub-track;
adding acoustic image sub-tracks and acoustic image material: one or more acoustic image materials are added in an acoustic image sub-track, and an acoustic image material icon corresponding to each acoustic image material is generated in the acoustic image sub-track, the length of acoustic image sub-track occupied by the icon matching the total duration corresponding to the acoustic image material;
editing acoustic image sub-track attributes;
editing acoustic image material attributes, the acoustic image material attributes likewise including start position, end position, start time, end time, total duration and playback time length;
wherein the acoustic image material is acoustic image track data, and the acoustic image track data includes variable domain acoustic image track data and fixed point acoustic image track data;
the variable domain acoustic image track data is obtained by: setting audio amplifier nodes, i.e. adding or deleting audio amplifier nodes in the audio amplifier distribution map; modifying audio amplifier node attributes, the attributes including audio amplifier coordinates, audio amplifier type, corresponding output channel and initialization level; setting the running path and dividing acoustic image regions, i.e. setting multiple acoustic image regions in the audio amplifier distribution map, each containing several audio amplifier nodes, and setting a running path that traverses every region; editing acoustic image region time attributes, including the moment corresponding to each region, the time required from the current region to the next region, and the total running duration; and recording the variable domain acoustic image track data, i.e. recording the output level value of each audio amplifier node at each moment while the acoustic image runs through the regions in turn along the set running path;
the fixed point acoustic image track data is obtained by: setting audio amplifier nodes, i.e. adding or deleting audio amplifier nodes in the audio amplifier distribution map; modifying audio amplifier node attributes, the attributes including audio amplifier coordinates, audio amplifier type, corresponding output channel and initialization level; setting acoustic image trajectory points and the total duration, i.e. selecting one or more audio amplifier nodes in the audio amplifier distribution map as acoustic image trajectory points and setting the dwell time of the acoustic image at each selected node; and recording the fixed point acoustic image track data, i.e. recording the output level value of each audio amplifier node at each moment within the above total duration.
- 2. The panorama multi-channel audio control method based on variable domain acoustic image control according to claim 1, characterized in that, when the variable domain acoustic image track data is generated, the output level value of each audio amplifier node at each moment, while the acoustic image runs through the acoustic image regions in turn along the set running path, is calculated as follows: the total running duration of a certain variable domain track is t and the path is divided into n acoustic image regions of equal width; the acoustic image runs along a straight path from an acoustic image region 1 to the next acoustic image region 2; the midpoint of the path segment lying inside region 1 is acoustic image trajectory control point 1 and the midpoint of the segment lying inside region 2 is acoustic image trajectory control point 2; while the current acoustic image trajectory point P runs from region 1 to region 2, the output level of each audio amplifier node in region 1 is domain1 dB, the output level of each audio amplifier node in region 2 is domain2 dB, and the output level of every audio amplifier node outside these two regions is zero or negative infinity, where
η = l_P2/l_12 = (l_12 − l_1P)/l_12,  domain1 dB = 10*log_e(η)÷2.3025851
β = l_1P/l_12 = (l_12 − l_P2)/l_12,  domain2 dB = 10*log_e(β)÷2.3025851
l_12 being the distance from acoustic image trajectory control point 1 to acoustic image trajectory control point 2, l_1P the distance from acoustic image trajectory control point 1 to acoustic image trajectory point P, and l_P2 the distance from acoustic image trajectory point P to acoustic image trajectory control point 2.
- 3. The panorama multi-channel audio control method based on variable domain acoustic image control according to claim 1, characterized in that, when the variable domain acoustic image track data is generated, the output level values of the audio amplifier nodes in the variable domain acoustic image track are recorded either continuously or at a certain frequency.
- 4. The panorama multi-channel audio control method based on variable domain acoustic image control according to claim 3, characterized in that, when the variable domain acoustic image track data is generated, the frequency is 25 frames/second or 30 frames/second.
- 5. The panorama multi-channel audio control method based on variable domain acoustic image control according to claim 1, characterized in that, when the variable domain acoustic image track data is generated, the running path is set and the acoustic image regions are divided as follows: a marker is dragged in the audio amplifier distribution map from a starting position along some direction to an end position, and several acoustic image regions are divided equally along the straight-line distance from the starting position to the end position; the border of each region is perpendicular to the straight line from the starting position to the end position, all regions have equal width, and the total running duration is the time taken to drag the marker from the starting position to the end position.
- 6. The panorama multi-channel audio control method based on variable domain acoustic image control according to claim 1, characterized in that, when the variable domain acoustic image track data is generated, the acoustic image regions in the audio amplifier distribution map do not overlap one another.
- 7. The panorama multi-channel audio control method based on variable domain acoustic image control according to claim 1, characterized in that, when the variable domain acoustic image track data is generated and the time attributes of the acoustic image regions are edited: if acoustic image region i originally corresponds to moment ti and to moment Ti after adjustment, the time originally required for the acoustic image to run from region i to the next region i+1 is ti' and is Ti' after adjustment, the original total duration of the running path is t, and the modified total duration is T, then Ti = ti/t*(T − t) + ti and Ti' = ti'/t*(T − t) + ti'.
- 8. The panorama multi-channel audio control method based on variable domain acoustic image control according to claim 1, characterized in that the method further comprises: adding audio sub-tracks; and editing the attributes of the audio sub-tracks, the attributes including sound-effect processing parameters, wherein the audio of the output channel corresponding to the audio track to which an audio sub-track belongs is adjusted by modifying the sound-effect parameters of that sub-track.
- 9. The panorama multi-channel audio control method based on variable domain acoustic image control according to claim 8, characterized in that the types of the audio sub-tracks include a volume and gain sub-track and EQ sub-tracks; each audio track is provided with one volume and gain sub-track and one or more EQ sub-tracks, the volume and gain sub-track being used to adjust the signal level of the output channel corresponding to the audio track to which it belongs, and the EQ sub-tracks being used to apply EQ sound-effect processing to the signal output by that output channel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310754855.9A CN104754243B (en) | 2013-12-31 | 2013-12-31 | Panorama multi-channel audio control method based on the control of variable domain acoustic image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104754243A CN104754243A (en) | 2015-07-01 |
CN104754243B true CN104754243B (en) | 2018-03-09 |
Family
ID=53593284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310754855.9A Active CN104754243B (en) | 2013-12-31 | 2013-12-31 | Panorama multi-channel audio control method based on the control of variable domain acoustic image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104754243B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106937022B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | multi-professional collaborative editing and control method for audio, video, light and machinery |
CN106937023B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | multi-professional collaborative editing and control method for film, television and stage |
CN106937021B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | performance integrated control method based on time axis multi-track playback technology |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1561144A (en) * | 2004-03-12 | 2005-01-05 | 陈健俊 | 3D8-X stero amplifying system |
CN101916095A (en) * | 2010-07-27 | 2010-12-15 | 北京水晶石数字科技有限公司 | Rehearsal performance control method |
CN103338420A (en) * | 2013-05-29 | 2013-10-02 | 陈健俊 | Control method of panoramic space stereo sound |
CN203241818U (en) * | 2013-05-29 | 2013-10-16 | 黄博卿 | Stage light and audio integration system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2795402A4 (en) * | 2011-12-22 | 2015-11-18 | Nokia Technologies Oy | A method, an apparatus and a computer program for determination of an audio track |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||