CN104750059B - Lamp light control method - Google Patents

Lamp light control method

Info

Publication number
CN104750059B
CN104750059B (application CN201310754824.3A)
Authority
CN
China
Prior art keywords
track
acoustic image
audio
light
control
Prior art date
Legal status
Active
Application number
CN201310754824.3A
Other languages
Chinese (zh)
Other versions
CN104750059A (en)
CN104750059B8 (en)
Inventor
周利鹤
黄石锋
邓俊曦
Other inventors have requested not to disclose their names
Current Assignee
Guangzhou Leafun Culture Science and Technology Co Ltd
Original Assignee
Guangzhou Leafun Culture Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Leafun Culture Science and Technology Co Ltd filed Critical Guangzhou Leafun Culture Science and Technology Co Ltd
Priority to CN201310754824.3A priority Critical patent/CN104750059B8/en
Publication of CN104750059A publication Critical patent/CN104750059A/en
Publication of CN104750059B publication Critical patent/CN104750059B/en
Application granted granted Critical
Publication of CN104750059B8 publication Critical patent/CN104750059B8/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0421 Multiprocessor system

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Studio Circuits (AREA)
  • Television Signal Processing For Recording (AREA)
  • Train Traffic Observation, Control, And Security (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

The present invention relates to performance-equipment control technology, and specifically to a light control method. The method includes: displaying a time axis on the display interface of an integrated control console; adding and/or deleting tracks used to control corresponding performance equipment, the tracks including a light track; editing track attributes; adding material; editing material attributes; and the integrated control console sending corresponding control instructions according to each track's attributes and its material attributes. The present invention solves the technical problems of editing and synchronizing current show programmes and of controlling moving acoustic-image effects.

Description

Lamp light control method
Technical field
The present invention relates to performance-equipment control technology, and specifically to a light control method.
Background art
In the process of arranging show programmes, one conspicuous problem is the coordination and synchronization between the various disciplines (audio, video, lighting, machinery and so on). In a large-scale performance the disciplines are relatively independent, so a rather large crew is needed to guarantee that the show is arranged and performed smoothly, and while each discipline's programme is being arranged most of the time is spent on coordination and synchronization between the disciplines, often far more than the time actually devoted to the programme itself.
Because the disciplines are relatively independent, their control methods differ greatly. For live audio-visual synchronized editing, video is controlled from the lighting console while audio is edited and controlled on a multi-track playback system; audio can easily be located to an arbitrary time and played back from there, but video can only be played from the beginning or from a given frame number (an operator can cue it to the corresponding position manually, but it cannot follow a time code), which lacks the flexibility needed for live performance control.
In addition, the loudspeaker positions of the existing professional loudspeaker systems used for film, television and stage are fixed, and the acoustic image is essentially fixed at the centre of the stage by the left/right main PA at the two sides of the stage or by the left/centre/right main PA. Although a performance venue is equipped with a large number of loudspeakers at various positions besides the main PA of the stage, the acoustic image of the loudspeaker system hardly ever changes during an entire performance.
Therefore, editing and synchronizing current show programmes, and flexibly controlling moving acoustic-image effects, are key technical problems urgently to be solved in this technical field.
Summary of the invention
The technical problem solved by the present invention is to provide an integrated performance control method that can simplify the multi-discipline control of film, television and stage performances and can set the acoustic image of the sound reinforcement system flexibly and quickly.
In order to solve the above technical problems, the technical solution adopted by the present invention is:
A light control method, including:
displaying a time axis on the display interface of an integrated control console;
adding and/or deleting tracks used to control corresponding performance equipment, the tracks including a light track;
editing track attributes;
adding material;
editing material attributes;
and the integrated control console sending corresponding control instructions according to each track's attributes and its material attributes.
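For illustration only, the following Python sketch shows one possible reading of the data model implied by the steps above: a console holds tracks, each track holds materials placed on the time axis, and control instructions are derived from track and material attributes. All class and field names are hypothetical and are not taken from the patent.

```python
# Hypothetical sketch of the timeline/track/material model described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Material:                 # a clip placed on a track
    name: str
    start_pos: float            # time-axis moment of the clip's left edge (s)
    end_pos: float              # time-axis moment of the clip's right edge (s)

@dataclass
class Track:
    kind: str                   # "audio", "video", "light", "device"
    muted: bool = False
    locked: bool = False
    materials: List[Material] = field(default_factory=list)

@dataclass
class Console:
    tracks: List[Track] = field(default_factory=list)

    def control_instructions(self, now: float):
        """Yield (track kind, material name) pairs that should be active at time `now`."""
        for track in self.tracks:
            if track.muted:
                continue
            for m in track.materials:
                if m.start_pos <= now < m.end_pos:
                    yield (track.kind, m.name)

console = Console([Track("light", materials=[Material("chase-1", 0.0, 12.5)])])
print(list(console.control_instructions(3.0)))   # [('light', 'chase-1')]
```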
Compared with the prior art, the beneficial effect is as follows: playback master control embodies the idea of "integrated show management". From a technical point of view the coupling between these units is very low; they can operate independently without affecting each other, and the only prominent link between them is "time", i.e. what is being played at which moment. From the user's point of view, the "time" relationship is what they care about most. If the states of these units can be gathered in one place to be viewed and managed, the user is spared many unnecessary troubles, such as coordinating the synchronization between the units, and cross-referencing and correcting the disciplines against one another while editing.
Brief description of the drawings
Fig. 1 is a schematic diagram of the integrated performance control method of the embodiment.
Fig. 2 is a schematic diagram of the audio control part of the integrated performance control method of the embodiment.
Fig. 3 is a schematic diagram of the operation of the audio sub-tracks of the integrated performance control method of the embodiment.
Fig. 4 is a schematic diagram of the video control part of the integrated performance control method of the embodiment.
Fig. 5 is a schematic diagram of the light control part of the integrated performance control method of the embodiment.
Fig. 6 is a schematic diagram of the device control part of the integrated performance control method of the embodiment.
Fig. 7 is a schematic diagram of the principle of the integrated performance control system of the embodiment.
Fig. 8 is a schematic diagram of the principle of the audio control module of the integrated performance control system of the embodiment.
Fig. 9 is a schematic diagram of the principle of the video control module of the integrated performance control system of the embodiment.
Fig. 10 is a schematic diagram of the principle of the lighting control module of the integrated performance control system of the embodiment.
Fig. 11 is a schematic diagram of the principle of the device control module of the integrated performance control system of the embodiment.
Fig. 12 is a schematic diagram of the interface of the multi-track playback editing module of the integrated performance control method of the embodiment.
Fig. 13 is a schematic diagram of the principle of the audio control part of the integrated performance control system of the embodiment.
Fig. 14 is a schematic diagram of the principle of the track matrix module of the integrated performance control system of the embodiment.
Fig. 15 is a schematic diagram of the principle of the video control part of the integrated performance control system of the embodiment.
Fig. 16 is a schematic diagram of the principle of the light control part of the integrated performance control system of the embodiment.
Fig. 17 is a schematic diagram of the principle of the device control part of the integrated performance control system of the embodiment.
Fig. 18 is a schematic diagram of the steps of the track-change acoustic image trajectory control method of the embodiment.
Fig. 19 is a schematic diagram of the steps of the track-change acoustic image trajectory data generation method of the embodiment.
Fig. 20 is a schematic diagram of the loudspeaker layout map and a track-change acoustic image trajectory of the embodiment.
Fig. 21 is a schematic diagram of a triangle of loudspeaker nodes of the embodiment.
Fig. 22 is a schematic diagram of the steps of the variable-domain acoustic image trajectory data generation method of the embodiment.
Fig. 23 is a schematic diagram of the loudspeaker layout map and a variable-domain acoustic image trajectory of the embodiment.
Fig. 24 is a schematic diagram of the steps of the fixed-point acoustic image trajectory data generation method of the embodiment.
Fig. 25 is a schematic diagram of the steps of the loudspeaker link data generation method of the embodiment.
Fig. 26 is a schematic diagram of a loudspeaker link of the embodiment.
Embodiment
All types of acoustic image trajectory control according to the present invention are further described below with reference to the accompanying drawings.
This embodiment provides an integrated performance control method that can simplify the multi-discipline control of film, television and stage performances and can set the acoustic image of the sound reinforcement system flexibly and quickly. Through the multi-track playback editing module of the integrated control console, the method arranges and controls the materials of multiple disciplines in one place. As shown in Fig. 1, the integrated performance control method (light control method) comprises the following steps:
S101: displaying a time axis on the display interface of the integrated control console;
S102: adding and/or deleting tracks used to control corresponding performance equipment;
S103: editing track attributes;
S104: adding material;
S105: editing material attributes;
S106: the integrated control console sending corresponding control instructions according to each track's attributes and its material attributes.
As shown in Fig. 2 and Fig. 12, the integrated performance control method includes multi-track audio playback control (corresponding to the audio control module described below), which specifically includes the following steps:
S201: Adding audio tracks: one or more audio tracks (regions) 1, 2 that are parallel to and aligned with the time axis are added on the display interface, each audio track corresponding to an output channel.
S202: Editing audio track attributes: the editable audio track attributes include track lock and track mute. The track mute attribute controls whether the audio material on the track and on all of its sub-tracks is muted, and acts as the master control of the audio track. When the track lock attribute is set, apart from a few attributes such as mute and the hiding of sub-tracks, the other attributes of the track and the material positions and material attributes on it cannot be modified.
S203: Adding audio material: one or more audio materials 111, 112, 113, 211, 212, 213, 214 are added to audio tracks 1 and 2, and a representation of each audio material is generated in the audio track; the length of audio track occupied by an audio material matches the total duration of that material. Before audio material is added, an audio material list is first obtained from the audio server, and material is then selected from that list and added to the audio track. After an audio material has been added to an audio track, an audio attribute file corresponding to that material is generated; the integrated control console controls the audio server by editing the audio attribute file and sending instructions to it, rather than by directly calling or editing the source file of the audio material, which guarantees the security of the source files and the stability of the integrated control console.
S204: Editing audio material attributes: the audio material attributes include start position, end position, start time, end time, total duration and playback duration. The start position is the time-axis moment corresponding to the starting edge (in the vertical direction) of the audio material and the end position is the time-axis moment corresponding to its ending edge (in the vertical direction); the start time is the moment on the time axis at which the material actually begins to play and the end time is the moment at which it actually stops playing. In general the start time may be later than the start position and the end time may be earlier than the end position. The total duration is the original length of the audio material, i.e. the time difference between the start position and the end position; the playback duration is the length of time the material is actually played on the time axis, i.e. the time difference between the start time and the end time. By adjusting the start time and end time, the audio material can be trimmed so that only the part the user wishes to play is played.
By adjusting (moving horizontally) the position of an audio material in its audio track, the start position and end position can be changed, but their relative positions on the time axis do not change, i.e. the length of the audio material does not change. By adjusting the start time and end time of an audio material, its actual playback moment and playback length on the time axis can be changed. Several audio materials can be placed in one audio track, meaning that within the period represented by the time axis several audio materials can be played one after another (through the corresponding output channel). Note that the position (time position) of the audio material in an audio track can be adjusted freely, but the materials must not overlap.
Further, because the integrated control console only manipulates the attribute files corresponding to the audio materials, it can also split and splice audio materials. Splitting means dividing one audio material on an audio track into several audio materials, each of which has its own attribute file while the source file remains intact; the integrated control console then sends control commands according to these new attribute files, calling the source file to perform the corresponding playback and audio operations in turn. Similarly, splicing means merging two audio materials into one, with their attribute files merged into a single attribute file, so that one attribute file directs the audio server to call the two audio source files.
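As a non-normative sketch of the attribute-file mechanism described in S203–S204 and the splitting operation above: trimming and splitting only rewrite attribute records, while the source file they point at is left untouched. The record layout and field names below are assumptions made for illustration.

```python
# Hypothetical attribute record for one audio material; the source file is never modified.
from dataclasses import dataclass

@dataclass
class AudioMaterialAttr:
    source_file: str
    start_pos: float    # left edge on the time axis (s)
    end_pos: float      # right edge on the time axis (s)
    start_time: float   # actual playback start (>= start_pos)
    end_time: float     # actual playback end   (<= end_pos)

    @property
    def total_duration(self) -> float:
        return self.end_pos - self.start_pos

    @property
    def playback_duration(self) -> float:
        return self.end_time - self.start_time

def split(attr: AudioMaterialAttr, at: float):
    """Split one material into two attribute records at time-axis moment `at`."""
    assert attr.start_pos < at < attr.end_pos
    left = AudioMaterialAttr(attr.source_file, attr.start_pos, at,
                             attr.start_time, min(attr.end_time, at))
    right = AudioMaterialAttr(attr.source_file, at, attr.end_pos,
                              max(attr.start_time, at), attr.end_time)
    return left, right

m = AudioMaterialAttr("song01.wav", 10.0, 70.0, 12.0, 65.0)
a, b = split(m, 40.0)
print(a.playback_duration, b.playback_duration)   # 28.0 25.0
```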
Further, several groups of physical operating keys, each corresponding to one of the audio tracks, can be provided on the integrated control console so that the attributes of the audio materials can be adjusted manually by the physical keys, for example a playback-time adjustment knob that nudges the position (time-axis position) of an audio material in its track forwards or backwards.
S205: Adding audio sub-tracks 12, 13, 14, 15, 21, 22: one or more sub-tracks corresponding to one of the audio tracks are added; each sub-track is parallel to the time axis and corresponds to the output channel of its audio track.
Each audio track can have attached sub-tracks, whose types include acoustic image sub-tracks and effect sub-tracks. An acoustic image sub-track applies acoustic image trajectory processing to part or all of the audio material of the audio track it belongs to, and an effect sub-track applies audio-effect processing to part or all of the audio material of that audio track. In this step the following steps can further be performed:
S301: Adding an acoustic image sub-track and acoustic image material: one or more acoustic image materials 121, 122 are added to the acoustic image sub-track, and a representation of each acoustic image material is generated in the sub-track; the length of sub-track occupied by an acoustic image material matches the total duration corresponding to that material.
S302: Editing acoustic image sub-track attributes: as with the audio track, the editable attributes include track lock and track mute.
S303: Editing acoustic image material attributes: as with audio material, the acoustic image material attributes include start position, end position, start time, end time, total duration and playback duration.
Through the acoustic image material on an acoustic image sub-track, acoustic image trajectory processing can be applied, within the period between the material's start time and end time, to the signal output by the output channel corresponding to the audio track the sub-track belongs to. Adding different types of acoustic image material to an acoustic image sub-track therefore applies different types of acoustic image trajectory processing to the signal of the corresponding output channel, and adjusting the start position, end position, start time and end time of each acoustic image material adjusts the moment at which the acoustic image trajectory processing starts and the duration of the acoustic image trajectory effect.
Acoustic image material differs from audio material in that audio material represents audio data, whereas acoustic image trajectory data is, within a period of set length, the output level data of each loudspeaker node changing over time, such that the acoustic image formed by the output levels of the virtual loudspeaker nodes in the loudspeaker layout map travels along a preset path or remains stationary. That is, acoustic image trajectory data contains the output-level change data of all loudspeaker nodes in the layout map over the set period. The types of acoustic image trajectory data are fixed-point, track-change and variable-domain acoustic image trajectory data; the type of the trajectory data determines the type of the acoustic image material, and the total movement duration corresponding to the trajectory data determines the time difference between the start position and end position of the acoustic image material, i.e. the total duration of the material. Acoustic image trajectory processing means adjusting, according to the acoustic image trajectory data, the actual output level of each physical loudspeaker corresponding to each loudspeaker node, so that the acoustic image of the physical loudspeaker system travels along the set path or remains stationary within the period of set length.
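A possible in-memory representation of acoustic image trajectory data as just described is sketched below, under the assumption that levels are stored in dB and sampled at discrete moments, with None standing in for negative infinity (a silent node); the node names are illustrative.

```python
# Sketch: acoustic image trajectory data = per-node output level (dB) over time.
from typing import Dict, List, Optional, Tuple

# node id -> list of (moment in seconds, output level in dB, or None for -infinity)
TrajectoryData = Dict[str, List[Tuple[float, Optional[float]]]]

def node_level_at(data: TrajectoryData, node: str, t: float) -> Optional[float]:
    """Return the most recent recorded output level of `node` at time t (None = muted)."""
    level = None
    for moment, value in data.get(node, []):
        if moment > t:
            break
        level = value
    return level

trajectory: TrajectoryData = {
    "front-left":  [(0.0, -6.0), (1.0, -12.0), (2.0, None)],
    "front-right": [(0.0, None), (1.0, -12.0), (2.0, -6.0)],
}
print(node_level_at(trajectory, "front-left", 1.5))   # -12.0
```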
S304: Adding effect sub-tracks: the effect sub-track types include volume/gain sub-tracks 13, 22 and EQ sub-tracks 14, 15, 21; each audio track can be given one volume/gain sub-track and one or more EQ sub-tracks. The volume/gain sub-track adjusts the signal level of the output channel corresponding to its audio track, and the EQ sub-tracks apply EQ processing to the signal output by that output channel.
S305: Editing effect sub-track attributes: besides track lock, track mute and the track identification, the attributes of an effect sub-track include the effect-processing parameters corresponding to its type; for example, a volume/gain sub-track carries an output-level adjustment parameter and an EQ sub-track carries EQ processing parameters. By changing the effect parameters of the sub-track, the sound of the output channel corresponding to the audio track it belongs to can be adjusted.
S206: Saving the data, or generating, according to the attributes of the audio tracks and their sub-tracks and the attributes of the audio and acoustic image materials, control instructions for the source files corresponding to the audio materials, and performing playback control as well as acoustic image and effect processing control on the source files of the audio materials according to these instructions.
The control instructions cover whether to call (play) the audio source file of an audio material, the start time and end time of source-file playback (measured on the time axis), and the acoustic image and effect processing of the source file; the concrete control instructions correspond to the attributes of each audio track and its attached sub-tracks, of the audio materials and of the acoustic image materials. In other words, an audio track does not directly call or process the source file of an audio material; it only manipulates the attribute file corresponding to the audio source file, and indirect control of the source file is achieved by editing that attribute file, adding or editing acoustic image material, and adjusting the attributes of the audio track and its sub-tracks.
For example, an audio material added to an audio track enters the playlist, and when that audio track starts playing the material will be played. By editing the audio track attributes the mute attribute of the track can be set, which controls whether the track and its attached sub-tracks are muted (effective); by setting the track lock attribute, apart from a few attributes such as mute and the hiding of sub-tracks, the other attributes of the track and the material positions and material attributes on it cannot be modified (locked state). See the description above for more detail.
As shown in Fig. 4 and Fig. 12, the integrated performance control method of this embodiment can optionally add video playback control (corresponding to the video control module described below), which specifically includes the following steps:
S401: Adding a video track: a video track 4 (region) that is parallel to and aligned with the time axis is added (on the display interface); the video track corresponds to a controlled device, in the present invention a video server.
S402: Editing video track attributes: the editable video track attributes include track lock and track mute, and are similar to the audio track attributes.
S403: Adding video material: one or more video materials 41, 42, 43, 44 are added to the video track and corresponding representations are generated in it; the length of video track occupied by a video material matches the total duration of that material. Before video material is added, a video material list is first obtained from the video server, and material is then selected from that list and added to the video track. After a video material has been added to the video track, a video attribute file corresponding to that material is generated; the integrated control console controls the video server by editing the video attribute file and sending instructions, rather than by directly calling or editing the source file of the video material, which guarantees the security of the source files and the stability of the integrated control console.
S404: Editing video material attributes: the video material attributes include start position, end position, start time, end time, total duration and playback duration, and are similar to the audio material attributes. Video material can likewise be moved horizontally, split and spliced, and a group of physical operating keys corresponding to the video track can be added to the integrated control console so that the attributes of the video material can be adjusted manually by the physical keys.
S405: Saving the data, or generating control instructions for the source files corresponding to the video materials according to the video track attributes and video material attributes, and performing playback and effect control on the source files of the video materials according to these instructions. As with the audio tracks, the concrete control instructions correspond to the attributes of the video track and of the video materials.
As shown in Fig. 5 and Fig. 12, the integrated performance control method of this embodiment can optionally add light control (corresponding to the lighting control module described below), which specifically includes the following steps:
S501: Adding a light track: a light track 3 (region) that is parallel to and aligned with the time axis is added (on the display interface); the light track corresponds to a controlled device, in the present invention a lighting-network signal adapter (such as an Art-Net network card).
S502: Editing light track attributes: the editable light track attributes include track lock and track mute, and are similar to the audio track attributes.
S503: Adding light material: one or more light materials 31, 32, 33 are added to the light track and corresponding representations are generated in it; the length of light track occupied by a light material matches the total duration of that material. As with audio and video material, the light track does not load the light material itself; an attribute file corresponding to the light material source file is generated, and control instructions are sent through the attribute file to control the output of the light material source file.
A light material is lighting-network control data of a certain duration, for example Art-Net data in which DMX data is encapsulated. Light material can be generated as follows: after a lighting programme has been arranged on a conventional lighting console, the integrated control console connects its lighting network interface to the lighting network interface of the conventional lighting console and records the light control signals output by the lighting console; during recording the integrated control console stamps the recorded light control signals with time codes so that they can later be edited and controlled on the light track.
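A minimal sketch of the recording step just described, assuming the lighting-network signal arrives as Art-Net packets over UDP on the standard Art-Net port 6454 and that the time code is simply the capture time relative to the start of recording; the actual time-code format used by the console is not specified in the text.

```python
# Sketch: record incoming Art-Net packets and stamp each with a relative time code,
# so the captured light control data can later be placed and edited on a light track.
import socket, time

ARTNET_PORT = 6454          # standard Art-Net UDP port (assumed to carry the adapter's stream)

def record_light_material(duration_s: float = 5.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", ARTNET_PORT))
    sock.settimeout(0.5)
    t0 = time.monotonic()
    frames = []             # list of (time code in seconds, raw Art-Net packet bytes)
    while time.monotonic() - t0 < duration_s:
        try:
            packet, _addr = sock.recvfrom(1024)
        except socket.timeout:
            continue
        frames.append((time.monotonic() - t0, packet))
    sock.close()
    return frames           # this becomes the recorded "light material"

if __name__ == "__main__":
    material = record_light_material(2.0)
    print(f"captured {len(material)} Art-Net frames")
```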
S504: Editing light material attributes: the light material attributes include start position, end position, start time, end time, total duration and playback duration, and are similar to the audio material attributes. Light material can likewise be moved horizontally, split and spliced, and a group of physical operating keys corresponding to the light track can be added to the integrated control console so that the attributes of the light material can be adjusted manually by the physical keys.
S505: Saving the data, or generating control instructions for the source files corresponding to the light materials according to the light track attributes and light material attributes, and controlling the output of the light material source files according to these instructions. As with the audio tracks, the concrete control instructions correspond to the attributes of the light track and of the light materials.
As shown in Fig. 6 and Fig. 12, the integrated performance control method of this embodiment can optionally add device control (corresponding to the device control module described below), which specifically includes the following steps:
S601: Adding device tracks: one or more device tracks 5 (regions) parallel to the time axis are added (on the display interface), each device track corresponding to a controlled device, for example a mechanical device. Before a device track is added it must be confirmed that the controlled device has established a connection with the integrated control console. The console and the controlled devices can be connected via TCP, for example with the integrated control console configured as a TCP server and each controlled device as a TCP client; after joining the network, the TCP client of a controlled device actively connects to the TCP server of the integrated control console.
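A minimal sketch of the TCP connection scheme of S601, with the integrated control console as server and each controlled device as a client that registers itself after joining the network; the port number and the "send your name first" registration handshake are assumptions made for illustration.

```python
# Sketch: integrated control console as TCP server; controlled devices connect as clients.
import socket, threading

DEVICE_PORT = 5000                      # assumed listening port for controlled devices
connected_devices = {}                  # device name -> socket

def serve_devices():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", DEVICE_PORT))
    server.listen()
    while True:
        conn, addr = server.accept()
        name = conn.recv(64).decode().strip()   # assumed: client sends its name first
        connected_devices[name] = conn
        print(f"device {name!r} registered from {addr}")

def send_instruction(device_name: str, instruction: bytes) -> bool:
    """Send a control instruction only if the device has registered (cf. S601/S606)."""
    conn = connected_devices.get(device_name)
    if conn is None:
        return False
    conn.sendall(instruction)
    return True

threading.Thread(target=serve_devices, daemon=True).start()
```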
S602: Editing device track attributes: the editable device track attributes include track lock and track mute, and are similar to the audio track attributes; if a device track is muted, none of its attached control sub-tracks performs any operation.
S603: Adding control sub-tracks: one or more control sub-tracks corresponding to one of the device tracks are added; each control sub-track is parallel to the time axis and corresponds to the controlled device of its device track.
S604: Adding control material: control material of the appropriate type is added according to the type of the control sub-track, and a corresponding control material representation is generated on that sub-track; the length of control sub-track occupied by a control material matches the total duration of that material.
The control sub-track types include TTL control sub-tracks, relay control sub-tracks and network control sub-tracks. Correspondingly, the control material that can be added to a TTL control sub-track includes TTL materials 511, 512, 513 (such as TTL high-level and TTL low-level control materials), the control material that can be added to a relay sub-track includes relay materials 521, 522, 523, 524 (such as relay-open and relay-close control materials), and the control material that can be added to a network control sub-track includes network materials 501, 502, 503 (such as TCP/IP, UDP, RS-232 and RS-485 communication control materials). By adding the corresponding control material, the corresponding control instruction can be sent; a control material is essentially a control instruction.
S605: Editing control material attributes: the attributes include start position, end position and total duration. By adjusting (moving horizontally) the position of a control material in its control sub-track, the start position and end position can be changed, but their relative positions on the time axis do not change, i.e. the length of the control material does not change. The start position of a control material is the time-axis moment at which the control instruction corresponding to the material starts being sent to the corresponding controlled device, and the end position is the time-axis moment at which sending of the control instruction stops.
Further, an association can be set between control materials in the same control sub-track, so that if the control command corresponding to the material whose start position falls at the earlier time-axis moment has not been executed successfully, the control instruction of the later associated control material is not sent (by the integrated control console) or not executed (by the controlled device); this is useful, for example, for opening/closing or raising/lowering a stage curtain.
Further, a guard time of a certain length can be set before and after a control material on a control sub-track, i.e. within the guard time the control sub-track cannot accept additional control material or cannot send control commands.
S606: Saving the data, or generating control instructions according to the attributes of the device tracks and their control sub-tracks and the attributes of the control materials, and sending the control instructions to the corresponding controlled devices.
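The dispatching behaviour of S605 and the guard time described above could look like the following sketch, in which a later associated material is skipped if the earlier one was not executed successfully and nothing is sent inside a guard window; the field names and the send callback are hypothetical.

```python
# Sketch: dispatching control materials in time order, respecting associations and guard time.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlMaterial:
    name: str
    start_pos: float                      # time-axis moment at which the instruction is sent
    depends_on: Optional[str] = None      # name of an earlier associated material
    guard_time: float = 0.0               # no other material may be sent within this window

def dispatch(materials, send):
    """`send(material) -> bool` reports whether the device executed the command successfully."""
    executed_ok = {}
    last_sent_end = float("-inf")
    for m in sorted(materials, key=lambda m: m.start_pos):
        if m.start_pos < last_sent_end:                      # inside a guard time: skip
            continue
        if m.depends_on and not executed_ok.get(m.depends_on, False):
            continue                                         # earlier associated command failed
        executed_ok[m.name] = send(m)
        last_sent_end = m.start_pos + m.guard_time

curtain = [ControlMaterial("curtain-open", 0.0, guard_time=2.0),
           ControlMaterial("curtain-raise", 3.0, depends_on="curtain-open")]
dispatch(curtain, send=lambda m: (print("send", m.name), True)[1])
```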
In addition, this embodiment also provides an integrated performance control system. As shown in Fig. 7, the system includes an integrated control console 70 and optionally an audio server 76, a video server 77, a lighting control module 78 and a device control module 79. The integrated control console 70 includes a multi-track editing and playback module 71, which can perform one or more of the audio control, video control, light control and device control of the integrated performance control method described above; the concrete implementation steps are not repeated here. The multi-track playback editing module includes an audio control module 72 and optionally a video control module 73, a lighting control module 74 and a device control module 75.
As shown in Fig. 8, the audio control module 72 includes an audio track adding module 81, an audio track attribute editing module 82, an audio material adding module 83, an audio material attribute editing module 84, an audio sub-track adding module 85 and a data saving / audio control instruction output module 86; the functions realized by these modules correspond one-to-one to steps S201 to S206 described above and are not repeated here (the same applies below).
Further, the audio playback control principle of the integrated performance control system is shown in Fig. 13. Besides the multi-track playback editing module, the integrated control console also includes a quick playback editing module and a physical input module. The quick playback editing module is used for editing audio material in real time and sending corresponding control instructions to the audio server 76 to play the source files corresponding to the audio materials; the physical input module corresponds to the physical operating keys on the integrated control console 71 and performs real-time tuning control of the audio sources fed into the console from outside.
Correspondingly, the audio server is provided with a mixing matrix module, a track matrix module, a 3×1 output mixing module and a physical output module. The mixing matrix module can receive the audio signals output from the audio source files in the audio server that are called, by way of control commands, from the quick playback editing module and the multi-track playback editing module, as well as the audio signals output by the physical input module; the track matrix module can likewise receive each of these audio inputs. The mixing matrix mixes the audio inputs and outputs the result to the output mixing module, and the track matrix module applies acoustic image trajectory processing to the audio inputs and outputs the result to the output mixing module. The output mixing module receives the audio outputs of the mixing matrix module, the track matrix module and the physical input module and, after 3×1 mixing, outputs them through the physical output interfaces of the physical output module. Acoustic image trajectory processing means adjusting the level output to each physical loudspeaker according to the acoustic image trajectory data, so that the acoustic image of the physical loudspeaker system travels along the set path or remains stationary within the period of set length.
In this embodiment the source files of the audio materials are stored on the audio server outside the integrated control console. The multi-track playback editing module never directly calls or processes a source file; it only manipulates the attribute file corresponding to the audio source file, and indirect control of the source file is achieved by editing the attribute file, adding or editing acoustic image material, and adjusting the attributes of the audio track and its sub-tracks. The output channel corresponding to each audio track therefore outputs only control signals/instructions for the audio source file, and the audio server that receives those control instructions then performs the various processing of the audio source file.
As shown in Fig. 14, the multi-track playback editing module receives the list of available audio materials from the audio server 76 and does not process the audio source files directly; the source files are stored on the audio server and, when the corresponding control commands are received, are called and subjected to the various effect-processing steps, for example mixing in the mixing matrix module and trajectory processing in the track matrix module. Acoustic image material is in fact also a control command; it can be stored on the integrated control console 71 or uploaded to the audio server.
As shown in Fig. 9, the video control module 73 includes a video track adding module 91, a video track attribute editing module 92, a video material adding module 93, a video material attribute editing module 94 and a data saving / video control instruction output module 95; the functions realized by these modules correspond one-to-one to steps S401 to S405 described above.
Further, the video editing and playback control principle of the integrated performance control system is shown in Fig. 15. The integrated control console does not execute the source files of the video materials directly; it sends control instructions to the video server by obtaining the video material list and the corresponding attribute files, and the video server then performs playback and effect operations on the source files of the video materials according to the control instructions.
As shown in Fig. 10, the lighting control module 74 includes a light track adding module 110, a light track attribute editing module 120, a light material adding module 130, a light material attribute editing module 140 and a data saving / light control instruction output module 150; the functions realized by these modules correspond one-to-one to steps S501 to S505 described above.
Further, the light control principle of the integrated performance control system is shown in Fig. 16. The integrated control console is also provided with a light signal recording module for recording the light control signals output by a lighting console and stamping the recorded light control signals with time codes during recording, so that they can be edited and controlled on the light track.
As shown in Fig. 11, the device control module 75 includes a device track adding module 151, a device track attribute editing module 152, a control sub-track adding module 153, a control material adding module 154, a control material attribute editing module 155 and a data saving / device control instruction output module 156; the functions realized by these modules correspond one-to-one to steps S601 to S606 described above.
Further, the device control principle of the integrated performance control system is shown in Fig. 17. The various device control signals output by the integrated control console are output to the corresponding controlled devices through the protocol interfaces on a device adapter.
In addition, the integrated control console may also include an acoustic image trajectory data generation module for creating (generating) acoustic image trajectory data (i.e. acoustic image material); the trajectory data obtained through this module can be called by the multi-track playback editing module so as to control the track matrix module of the audio server to carry out acoustic image trajectory control. Further, this embodiment provides a track-change acoustic image trajectory control method in which a control host (such as the integrated control console or the audio server) sets the output level values of the loudspeaker nodes of a physical loudspeaker system so that the acoustic image moves in a set manner, or remains stationary, within a set total duration. As shown in Fig. 18, the control method includes:
S101: generating acoustic image trajectory data;
S102: within the total duration corresponding to the acoustic image trajectory data, adjusting the output level of each physical loudspeaker according to the acoustic image trajectory data;
S103: within the total duration, superimposing the input level of the signal fed to each physical loudspeaker and the output level of the corresponding physical loudspeaker to obtain the level actually output by each physical loudspeaker.
Acoustic image trajectory data is, within a period of set length (i.e. the total duration over which the acoustic image lasts), the output level data of each virtual loudspeaker node changing over time, such that the acoustic image formed by the output levels of the virtual loudspeaker nodes in the virtual loudspeaker layout map on the integrated control console travels along a preset path or remains stationary. That is, the acoustic image trajectory data contains the output-level change data of all loudspeaker nodes in the layout map over the set period. For each loudspeaker node, its output level varies over time within that period and may also be zero, negative or even negative infinity, with negative infinity preferred.
Each loudspeaker node corresponds to a physical loudspeaker unit in the physical loudspeaker system, and each physical loudspeaker unit comprises one or more loudspeakers located at the same position; i.e. each loudspeaker node can correspond to one or more co-located loudspeakers. In order for the physical loudspeaker system to reproduce the acoustic image path accurately, the position distribution of the virtual loudspeaker nodes in the loudspeaker layout map should correspond to the position distribution of the physical loudspeaker units; in particular, the relative positions between the loudspeaker nodes should correspond to the relative positions between the physical loudspeaker units.
The level actually output by a physical loudspeaker is the superposition of the level of the input signal and the output level, in the acoustic image trajectory data, of the loudspeaker node corresponding to that physical loudspeaker. The former is a property of the input signal; the latter can be regarded as a property of the physical loudspeaker itself. At any given moment different input signals have different input levels, whereas a given physical loudspeaker has only one output level. Acoustic image trajectory processing can therefore be understood as processing the output levels of the physical loudspeakers so as to produce the preset acoustic image trajectory effect (including a stationary acoustic image).
The superposition of the input level and the output level of a physical loudspeaker can be performed before the audio signal actually reaches the loudspeaker, or after it has entered the loudspeaker; this depends on how the links of the whole sound reinforcement system are composed and on whether the loudspeaker has a built-in audio signal processing module such as a DSP unit.
The types of acoustic image trajectory data include fixed-point acoustic image data, track-change acoustic image trajectory data and variable-domain acoustic image trajectory data. When acoustic image trajectory data is generated by simulation on the integrated control console, in order to control the speed and course of the acoustic image conveniently, this embodiment of the invention represents the travel path of the acoustic image by the line segments connecting, in sequence, a number of discretely distributed acoustic image trajectory control points in the loudspeaker layout map; that is, the travel path of the acoustic image and its overall travel time are determined by a number of discretely distributed acoustic image trajectory control points.
A fixed-point acoustic image is the situation where, within a period of set length, one or more selected loudspeaker nodes in the loudspeaker layout map output level continuously while the output level of the unselected nodes is zero or negative infinity. Correspondingly, fixed-point acoustic image data is the output level data of each loudspeaker node changing over time within the period of set length, in which the selected loudspeaker nodes output level continuously and the unselected nodes output no level, or output a level of zero or negative infinity. For a selected node the output level is continuous within the set period (it may still fluctuate up and down); for an unselected node the output level remains negative infinity within the set period.
A track-change acoustic image is the situation where, within a period of set length, the loudspeaker nodes output levels according to a certain rule so that the acoustic image travels along a preset path. Correspondingly, track-change acoustic image trajectory data is the output level data of each loudspeaker node changing over time so that the acoustic image travels along the preset path within the set period. The travel path of the acoustic image need not be exactly accurate, and the movement (travel) duration of the acoustic image will not be very long; it is only necessary to create an acoustic image travelling effect that the audience can roughly recognize.
A variable-domain acoustic image is the situation where, within a period of set length, the output levels of the loudspeaker nodes change according to a certain rule so that the acoustic image travels across preset regions. Correspondingly, variable-domain acoustic image trajectory data is the output level data of each loudspeaker node changing over time so that the acoustic image travels across the preset regions within the set period.
As shown in Fig. 19, track-change acoustic image trajectory data can be obtained by the following method:
S201: Setting loudspeaker nodes: loudspeaker nodes 11 are added to or deleted from the loudspeaker layout map 10, see Fig. 20.
S202: Modifying loudspeaker node attributes: the attributes of a loudspeaker node include its coordinates, loudspeaker type, corresponding output channel, initialization level, loudspeaker name and so on. A loudspeaker node is represented in the layout map by a loudspeaker icon, and its coordinate position can be changed by moving the icon. The loudspeaker type distinguishes, for example, full-range loudspeakers from sub-bass loudspeakers; the specific types can be divided according to actual needs. Each loudspeaker node in the layout map is assigned an output channel, each output channel corresponds to one physical loudspeaker unit in the physical loudspeaker system, and each physical loudspeaker unit comprises one or more loudspeakers located at the same position; i.e. each loudspeaker node can correspond to one or more co-located loudspeakers. In order to reproduce the acoustic image travel path designed in the layout map, the position distribution of the physical loudspeaker units should correspond to the position distribution of the loudspeaker nodes in the layout map.
S203: Dividing triangular regions: as shown in Fig. 20, the loudspeaker layout map is divided into a number of triangular regions according to the distribution of the loudspeaker nodes, the three vertices of each triangular region being loudspeaker nodes. The triangular regions do not overlap, no triangular region contains any other loudspeaker node, and each loudspeaker node corresponds to an output channel (or audio playback device);
Further, auxiliary loudspeaker nodes can also be set to help determine the triangular regions; an auxiliary loudspeaker node has no corresponding output channel and outputs no level;
S204: Setting the acoustic image trajectory control points and travel path: the path 12 along which the acoustic image travels over time is set in the loudspeaker layout map, together with a number of acoustic image trajectory control points 14 located on that path. The travel path and the trajectory control points can be set by the following methods (a sketch of both follows the list):
1. Point-by-point construction: the (coordinate) positions of a number of acoustic image trajectory control points are determined in sequence in the loudspeaker layout map, and the control points are connected in sequence to form the acoustic image travel path. The moment corresponding to the first control point is zero, and the moment corresponding to each subsequent control point is the time elapsed from determining the first control point to determining the current control point. For example, the control points can be clicked in the layout map with a pointing marker (such as the mouse pointer); the time elapsed from clicking one control point to clicking the next determines the time span between the two trajectory points, from which the moment corresponding to each trajectory point is finally calculated;
2. Drag generation: a marker (such as the mouse pointer) is dragged in the loudspeaker layout map along an arbitrary straight line, curve or polyline to determine the acoustic image travel path; while the marker is being dragged, starting from the initial position, an acoustic image trajectory control point is generated on the travel path at every interval Ts. In this embodiment Ts is 108 ms;
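The two construction modes above can be sketched as follows, assuming pointer events arrive as (x, y, wall-clock time) tuples and using Ts = 108 ms as in this embodiment; this is an illustrative reading, not the console's actual input handling.

```python
# Sketch: building acoustic image trajectory control points from pointer input.
def points_from_clicks(clicks):
    """clicks: list of (x, y, wall_time). The first point gets moment 0; each later point
    gets the time elapsed since the first click (point-by-point construction)."""
    t0 = clicks[0][2]
    return [(x, y, wall_time - t0) for x, y, wall_time in clicks]

def points_from_drag(samples, ts=0.108):
    """samples: densely sampled (x, y, wall_time) along the dragged path.
    Keep one control point every Ts seconds, starting at the initial position."""
    t0 = samples[0][2]
    points, next_moment = [], 0.0
    for x, y, wall_time in samples:
        if wall_time - t0 >= next_moment:
            points.append((x, y, wall_time - t0))
            next_moment += ts
    return points

drag = [(i * 0.01, 0.0, i * 0.02) for i in range(100)]   # pointer sampled every 20 ms
print(len(points_from_drag(drag)))                        # roughly one point per 108 ms
```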
S205: Editing the attributes of the acoustic image trajectory control points: the attributes of a control point include its coordinate position, its corresponding moment, and the time needed to reach the next control point. For a selected control point, one or more of its corresponding moment, the time needed from the selected control point to the next control point, and the total duration corresponding to the acoustic image travel path can be modified.
Assume that the moment corresponding to acoustic image trajectory control point i is ti, the original time needed for the acoustic image to travel from control point i to the next trajectory point i+1 is ti', and the total duration corresponding to the travel path is t. This means that the time needed for the acoustic image to travel from the initial position to control point i is ti, and the time the acoustic image needs to travel the whole path is t.
If the moment corresponding to a certain control point is modified, the moments corresponding to all control points before that control point, and the total duration of the travel path, all need to be adjusted. If the original moment of control point i is ti and its moment after modification is Ti, the original moment of any control point j before control point i is tj and its moment after adjustment is Tj, the original total duration of the travel path is t and the modified total duration is T, then Tj = tj + tj/ti*(Ti − ti) (i.e. Tj = tj*Ti/ti) and T = t + (Ti − ti). The adjustment used by the invention is simple, reasonable and requires very little computation.
It will be understood that, after the moment corresponding to any control point has been modified, the increased or decreased time can be distributed, in proportion to their durations, among all the control points before that control point (the manner described above), or it can be distributed, in proportion to their durations, among all the control points on the acoustic image travel path. With the latter approach, suppose the time to be added at control point i is ki; the moment of control point i is then modified to Ti = (ki*ti/t) + ti, i.e. the time ki is not assigned to control point i alone, and every control point receives a share of the added time in proportion to its ratio to the total duration of the travel path.
If the time needed from a certain control point to the next control point is adjusted, the moment corresponding to the next control point, and the total duration of the travel path, need to be adjusted. If the original moment of control point i is ti and its moment after modification is Ti, the original time needed for the acoustic image to travel from control point i to the next trajectory point i+1 is ti' and the time after modification is Ti', the original total duration of the travel path is t and the modified total duration is T, then Ti+1 = Ti + Ti' and T = t + (Ti − ti) + (Ti' − ti').
If the total duration corresponding to the acoustic image travel path is modified, the moment corresponding to every control point on the path and the time from each control point to the next all need to be adjusted. If the original moment of control point i is ti and its moment after adjustment is Ti, the original time needed from control point i to the next trajectory point i+1 is ti' and the time after adjustment is Ti', the original total duration of the path is t and the modified total duration is T, then Ti = ti/t*(T − t) + ti and Ti' = ti'/t*(T − t) + ti'.
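Two of the adjustment rules above are sketched below. The first rule is implemented as a proportional redistribution of the added time Ti − ti over the earlier control points (consistent with the proportional distribution the text describes), and the later points are assumed to shift by the same amount so that T = t + (Ti − ti) holds; the variable names mirror the patent's ti and t.

```python
# Sketch: re-timing acoustic image trajectory control points.
# moments[j] = tj, the moment of control point j (moments[0] == 0); total = t.

def modify_point_moment(moments, total, i, new_ti):
    """Case 1: the moment of point i changes from ti to Ti. Earlier points scale
    proportionally; later points are assumed to shift by the same delta, which keeps
    their segment times unchanged and gives T = t + (Ti - ti)."""
    ti = moments[i]
    delta = new_ti - ti
    new_moments = [tj + tj / ti * delta if j <= i else tj + delta
                   for j, tj in enumerate(moments)]
    return new_moments, total + delta

def modify_total_duration(moments, total, new_total):
    """Case 3: the total duration changes from t to T; every moment (and hence every
    segment) scales as Ti = ti/t*(T - t) + ti."""
    return [ti + ti / total * (new_total - total) for ti in moments], new_total

moments, total = [0.0, 2.0, 4.0, 6.0], 6.0
print(modify_point_moment(moments, total, 2, 5.0))   # ([0.0, 2.5, 5.0, 7.0], 7.0)
print(modify_total_duration(moments, total, 12.0))   # ([0.0, 4.0, 8.0, 12.0], 12.0)
```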
S206:Record becomes rail acoustic image track data:Each audio amplifier node is recorded to run process along setting path of running in acoustic image In each moment output level numerical value.
For becoming for rail acoustic image, the output electricity of the related audio amplifier node for generating acoustic image can be calculated by the following method Level values.As shown in figure 21, it is assumed that acoustic image tracing point i(It is not necessarily acoustic image TRAJECTORY CONTROL point)Enclosed positioned at by three audio amplifier nodes In the delta-shaped region formed, acoustic image tracing point i is ti at the time of correspondence, and now the three of vertex position audio amplifier node will be defeated Go out a certain size level, the output level value of other audio amplifier nodes in audio amplifier distribution map beyond these three audio amplifier nodes It is zero or negative infinite, so as to ensure that the acoustic image at ti moment in audio amplifier distribution map is located at above-mentioned acoustic image tracing point i.For this three The audio amplifier node A of any apex of angular domain, this moment ti output level are dBA1=10*lg(LA’/LA), wherein LA’ For remaining two straight distance of summit institute structure of the acoustic image tracing point to the delta-shaped region, LAFor the audio amplifier node A to remaining The two straight distances of summit institute structure;
Further, an initialization level value can also be set for each speaker node. Suppose the initialization level of the above speaker node A is dBA; then at moment ti the output level of node A becomes dBA1' = dBA + 10·lg(LA'/LA). Once their initialization levels are set, the output levels of the remaining speaker nodes at moment ti follow by analogy.
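For illustration, a minimal Python sketch of this vertex-level calculation (the names are hypothetical; a sketch under the formula above, not a definitive implementation):

```python
import math

def vertex_level_db(dist_image_to_opposite_edge, dist_vertex_to_opposite_edge,
                    init_level_db=0.0):
    """Output level of one vertex speaker node for a sound image inside the triangle.

    Implements dB = init + 10 * lg(L_A' / L_A), where L_A' is the distance from
    the sound-image point to the edge opposite vertex A and L_A is the distance
    from vertex A to that edge.
    """
    ratio = dist_image_to_opposite_edge / dist_vertex_to_opposite_edge
    if ratio <= 0.0:
        return float("-inf")  # the image lies on the opposite edge: node A is silent
    return init_level_db + 10.0 * math.log10(ratio)
```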
Further, as shown in Figure 20, if some acoustic-image trajectory points (or part of the running path, for example the end of the movement trajectory) do not fall inside any triangular region formed by three speaker nodes, an auxiliary speaker node 13 can be set to form a new triangular region, so that every acoustic-image trajectory point falls inside a corresponding triangular region. The auxiliary speaker node has no corresponding output channel and outputs no level; it is used only to help define the triangular region;
Further, when recording the output level values of the speaker nodes, the recording can be continuous or can be done at a certain frequency. The latter means that the output level value of each speaker node is recorded once per fixed time interval. In this embodiment, the output level value of each speaker node while the acoustic image runs along the set path is recorded at a frequency of 25 frames per second or 30 frames per second. Recording the output level data of each speaker node at a fixed frequency reduces the data volume, speeds up the acoustic-image trajectory processing applied to the input audio signal, and ensures the real-time performance of the acoustic-image running effect.
As shown in Figure 22, variable-domain acoustic-image trajectory data can be obtained as follows:
S501: Set speaker nodes: add or delete speaker nodes in the speaker distribution map.
S502: Modify speaker node attributes: the attributes of a speaker node include the speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name and so on. A speaker node is represented in the speaker distribution map by a speaker icon, and its coordinate position can be changed by moving the icon. The speaker type distinguishes full-range speakers from ultra-low-frequency speakers; specific types can be divided according to actual needs. Every speaker node in the speaker distribution map is assigned an output channel, and each output channel corresponds to one speaker entity in the physical speaker system; each speaker entity comprises one or more co-located speakers. In other words, each speaker node can correspond to one or more co-located speakers. To reproduce the acoustic-image running path designed in the speaker distribution map, the position distribution of the speaker entities should correspond to the position distribution of the speaker nodes in the speaker distribution map.
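A minimal data-structure sketch of the node attributes listed in S502 (the class and field names, and the example coordinates, are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SpeakerNode:
    """One node in the speaker distribution map (fields follow step S502)."""
    name: str
    x: float                   # coordinates in the distribution map
    y: float
    speaker_type: str          # e.g. "full-range" or "ultra-low-frequency"
    output_channel: int        # channel driving one physical speaker entity
    init_level_db: float = 0.0

# A hypothetical map with three full-range nodes forming one triangular region.
speaker_map = [
    SpeakerNode("A", 0.0, 0.0, "full-range", 1),
    SpeakerNode("B", 4.0, 0.0, "full-range", 2),
    SpeakerNode("C", 2.0, 3.0, "full-range", 3),
]
```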
S503: Set the acoustic-image running path and divide acoustic-image regions: set multiple acoustic-image regions in the speaker distribution map, each containing several speaker nodes, and set a running path that traverses every region. In other words, each acoustic-image region is treated as one "acoustic-image point", and the acoustic image runs from one region to the next until it has passed through all regions. The acoustic-image regions can be set arbitrarily anywhere in the speaker distribution map, or they can be set quickly in the following manner:
Set a straight-line acoustic-image running path in the speaker distribution map and set several acoustic-image regions along that path, the boundary of each region being approximately perpendicular to the running direction of the acoustic image. These regions can be arranged side by side or at intervals, but to keep the acoustic-image movement (running) continuous, the side-by-side arrangement is preferred. The total area of these regions is less than or equal to the area of the whole speaker distribution map. When dividing the acoustic-image regions, either equal widths or unequal widths can be used.
In practice, a drag indicator (such as the mouse pointer) can be used to set the acoustic-image running path and divide the acoustic-image regions at the same time. Specifically, the drag indicator is moved in the speaker distribution map from a start position to an end position along some direction, and several acoustic-image regions are divided equally according to the straight-line distance from the start position to the end position; the boundary of each region is perpendicular to the straight line from the start position to the end position, and all regions have equal width. The total running duration of the acoustic image is the time taken to move the drag indicator from the start position to the end position.
Suppose the straight-line distance of the indicator from the start position to the end position is R, the total duration used is t, and the number of equally divided acoustic-image regions is n; then n acoustic-image regions of width R/n are generated automatically, and the time corresponding to each acoustic-image region is t/n.
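As a small worked illustration (hypothetical function name and example values):

```python
def divide_regions(distance, total_duration, n):
    """Equal-width division of a straight running path into n acoustic-image regions.

    Returns (region_width, region_time): width = R / n and time = t / n.
    """
    return distance / n, total_duration / n

# A 12 m drag performed over 6 s, divided into 4 regions of 3 m and 1.5 s each:
width, region_time = divide_regions(12.0, 6.0, 4)
```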
S504: Edit the time attributes of the acoustic-image regions, including the moment corresponding to each acoustic-image region, the time required from the current region to the next region, and the total running duration of the acoustic image. Editing the acoustic-image region attributes is similar to editing the variable-track acoustic-image trajectory point attributes. If the moment corresponding to a certain acoustic-image region is modified, the moments corresponding to all regions before it and the total running duration of the acoustic image all need to be adjusted. If the time from a certain region to the next region is adjusted, the moment corresponding to the next region and the total running duration need to be adjusted. If the total running duration is modified, the moment corresponding to every region on the running path and the time from each region to the next are all adjusted.
S505: Record the variable-domain acoustic-image trajectory data: record the output level value of each speaker node at each moment while the acoustic image runs in turn through each acoustic-image region along the set running path.
For a variable-domain acoustic image, the output level values of the speaker nodes involved in generating the acoustic image can be calculated as follows.
As shown in Figure 23, suppose the total running duration of the acoustic image of a certain variable-domain track is t and the path is divided into four acoustic-image regions of equal width. The acoustic image runs along a straight path from acoustic-image region 1 (region i) to the next acoustic-image region 2 (region i+1). The midpoint of the path segment lying in region 1 is acoustic-image trajectory control point 1 (control point i), and the midpoint of the segment lying in region 2 is acoustic-image trajectory control point 2 (control point i+1). While acoustic-image trajectory point P runs from region 1 to region 2, the output level of each speaker node in region 1 is Domain1 dB (Domain i dB), the output level of each speaker node in region 2 is Domain2 dB (Domain i+1 dB), and the output level of the speaker nodes outside these two regions is zero or negative infinity.
Domain1 dB = 10·ln(η) ÷ 2.3025851 = 10·lg(η)
Domain2 dB = 10·ln(β) ÷ 2.3025851 = 10·lg(β)
Here l12 is the distance from acoustic-image trajectory control point 1 to acoustic-image trajectory control point 2, l1P is the distance from acoustic-image trajectory control point 1 to acoustic-image trajectory point P, and lP2 is the distance from the current acoustic-image trajectory point P to acoustic-image trajectory control point 2. The above formulas show that each acoustic-image trajectory point has two acoustic-image-region output levels; however, when the trajectory point coincides with an acoustic-image trajectory control point, only one region outputs a level. For example, when trajectory point P moves onto acoustic-image trajectory control point 2, only acoustic-image region 2 outputs a level, and the output level of acoustic-image region 1 is zero.
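For illustration only, the crossfade between the two regions could be sketched as follows in Python, under the assumption (not stated explicitly above) that η = lP2/l12 and β = l1P/l12, so that region 1 fades to silence and region 2 reaches full level as P arrives at control point 2; the function name is hypothetical:

```python
import math

def region_crossfade_db(l_1P, l_P2):
    """Crossfade levels between two adjacent acoustic-image regions.

    Assumes eta = l_P2 / l_12 and beta = l_1P / l_12 (an interpretation of the
    ratios left undefined in the text); Domain dB = 10 * lg(ratio).
    """
    l_12 = l_1P + l_P2  # P lies on the straight segment between the two control points
    def to_db(ratio):
        return float("-inf") if ratio <= 0.0 else 10.0 * math.log10(ratio)
    domain1_db = to_db(l_P2 / l_12)  # region 1 fades out as P approaches control point 2
    domain2_db = to_db(l_1P / l_12)  # region 2 fades in, reaching 0 dB at control point 2
    return domain1_db, domain2_db
```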
When recording the output level values of the speaker nodes for a variable-domain acoustic-image track, the recording can be continuous or can be done at a certain frequency. The latter means that the output level value of each speaker node is recorded once per fixed time interval. In this embodiment, the output level value of each speaker node while the acoustic image runs along the set path is recorded at a frequency of 25 frames per second or 30 frames per second. Recording the output level data of each speaker node at a fixed frequency reduces the data volume, speeds up the acoustic-image trajectory processing applied to the input audio signal, and ensures the real-time performance of the acoustic-image running effect.
As shown in Figure 24, fixed-point acoustic-image trajectory data can be obtained as follows:
S701: Set speaker nodes: add or delete speaker nodes in the speaker distribution map.
S702: Modify speaker node attributes: the attributes of a speaker node include the speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name and so on.
S703: Set acoustic-image trajectory points and the total duration: select one or more speaker nodes in the speaker distribution map, each selected speaker node serving as an acoustic-image trajectory point, and set the dwell time of the acoustic image at each speaker node.
S704: Record the fixed-point acoustic-image trajectory data: record the output level value of each speaker node at each moment within the above total duration.
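A rough Python sketch of building such a per-frame record from dwell times (the node names, dwell times and the 0 dB active level are placeholders, not values from the text):

```python
def fixed_point_levels(dwell_times, frame_rate=25):
    """Per-frame output levels for a fixed-point acoustic image.

    dwell_times: list of (node_name, seconds) pairs in playback order.
    In each frame only the node where the image currently dwells is active
    (0 dB placeholder); all other nodes are silent (-inf dB).
    """
    nodes = [name for name, _ in dwell_times]
    frames = []
    for name, seconds in dwell_times:
        for _ in range(round(seconds * frame_rate)):
            frames.append({n: (0.0 if n == name else float("-inf")) for n in nodes})
    return frames

timeline = fixed_point_levels([("A", 2.0), ("B", 1.5), ("C", 3.0)])
```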
In addition, the acoustic-image trajectory data of the present invention also includes speaker link data. A speaker link is an association operation performed on speaker nodes: when the active speaker node of an association outputs a level, the passive speaker nodes associated with it automatically output levels as well. The speaker link data is the output level difference of the passive speaker nodes relative to the active speaker node after the association operation has been applied to the selected speaker nodes. Speaker nodes that need to be link-associated should be relatively close to one another in the spatial distribution.
As shown in Figure 25, speaker link data can be obtained as follows:
S801: Set speaker nodes: add or delete speaker nodes in the speaker distribution map.
S802: Modify speaker node attributes: the attributes of a speaker node include the speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name and so on.
S803: Set the speaker node link relationship: link the selected ultra-low-frequency speaker node to several nearby full-range speaker nodes;
S804: Record the speaker link data: calculate and record the output level DerivedTrim of the ultra-low-frequency speaker, where DerivedTrim = 10·log(Ratio) + DeriveddB and Ratio = Σ 10^((Trim_i + LinkTrim_i)/10). Here Trim_i is the own output level value of any linked full-range speaker node i, LinkTrim_i is the link level originally set between full-range speaker node i and the ultra-low-frequency speaker, DeriveddB is the initialization level value of the ultra-low-frequency speaker node, and DerivedTrim is the output level value of the ultra-low-frequency speaker node after it has been linked to the full-range speaker nodes. An ultra-low-frequency speaker node can be set to link to one or more full-range speaker nodes; after linking, whenever a full-range speaker node outputs a level, the ultra-low-frequency speaker node linked to it automatically outputs a level as well, so as to create a certain sound effect together with the full-range speaker node. When one ultra-low-frequency speaker node is linked to one full-range speaker node, only the distance between the two, the nature of the sound source, the required sound effect and so on need to be considered in order to set the output level at which the ultra-low-frequency speaker node automatically follows the playback of the full-range speaker node, that is, the link level.
As shown in Figure 26, suppose the ultra-low-frequency speaker node 24 in the speaker distribution map links to three nearby full-range speaker nodes. The own output level values of full-range speaker nodes 21, 22 and 23 are Trim1, Trim2 and Trim3 respectively, and the link level values originally set between ultra-low-frequency speaker node 24 and full-range speaker nodes 21, 22 and 23 are LinkTrim1, LinkTrim2 and LinkTrim3 respectively. Let the overall level summation ratio be Ratio, the own initialization level value of ultra-low-frequency speaker node 24 be DeriveddB, and the final output level value of ultra-low-frequency speaker node 24 be DerivedTrim. Then:
Ratio = 10^((Trim1+LinkTrim1)/10) + 10^((Trim2+LinkTrim2)/10) + 10^((Trim3+LinkTrim3)/10)
DerivedTrim = 10·log(Ratio) + DeriveddB
When Ratio is greater than 1, the output level gain obtained by ultra-low-frequency speaker node 24 from linking to these three full-range speaker nodes is 0, i.e. its final output level value is the initialization level value.
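A minimal Python sketch of this derived-level calculation (the function and variable names, and the example levels, are hypothetical; the clamp reflects the statement above that the derived gain is 0 when Ratio exceeds 1):

```python
import math

def derived_trim(trims_db, link_trims_db, derived_db):
    """Derived output level of an ultra-low-frequency speaker node linked to
    several full-range speaker nodes.

    Ratio = sum(10 ** ((Trim_i + LinkTrim_i) / 10))
    DerivedTrim = 10 * log10(Ratio) + DeriveddB, with the gain set to 0 when Ratio > 1.
    """
    ratio = sum(10 ** ((t + lt) / 10.0) for t, lt in zip(trims_db, link_trims_db))
    gain = 10.0 * math.log10(ratio)
    if ratio > 1.0:
        gain = 0.0
    return gain + derived_db

# Three linked full-range nodes as in Figure 26, with made-up level values:
print(derived_trim([-6.0, -9.0, -12.0], [-3.0, -3.0, -3.0], derived_db=-5.0))
```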

Claims (5)

  1. A lamp light control method, characterized in that it comprises:
    displaying a time axis on the display interface of an integrated control platform;
    adding and/or deleting tracks used to control corresponding performance devices, the tracks including a light track;
    editing track attributes;
    adding material;
    editing material attributes;
    the integrated control platform sending corresponding control instructions according to the attributes of each track and of its material;
    wherein adding tracks used to control corresponding performance devices comprises:
    adding light tracks that are parallel to one another and aligned with the time axis, each light track corresponding to a controlled device;
    wherein editing track attributes comprises: editing the light track attributes, the editable light track attributes including track lock and track mute, the track mute attribute being used to control whether the light track takes effect, and the track lock being used to lock the light track.
  2. The lamp light control method according to claim 1, characterized in that, when light material is added, the light material can be added to the light track, a light material icon corresponding to the light material is generated in the light track, and the length of the light track occupied by the light material icon matches the total duration of the light material.
  3. The lamp light control method according to claim 2, characterized in that, when material attributes are edited, the attributes of the light material can be edited, the light material attributes including start position, end position, start time, end time, total duration and playback time length.
  4. The lamp light control method according to claim 1, characterized in that the light material is generated as follows: the integrated control platform records the light control signal output by a lighting console and stamps the recorded light control signal with time code during the recording process so as to form the light material, the total duration of the light material being the difference between the time code at the start of recording and the time code at the end of recording.
  5. The lamp light control method according to claim 1, characterized in that the control instructions include an instruction to play the source file corresponding to the material and an instruction to apply effect processing to the source file corresponding to the material.
CN201310754824.3A 2013-12-31 2013-12-31 Lamp light control method Active CN104750059B8 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310754824.3A CN104750059B8 (en) 2013-12-31 2013-12-31 Lamp light control method

Publications (3)

Publication Number Publication Date
CN104750059A CN104750059A (en) 2015-07-01
CN104750059B (en) 2017-11-17
CN104750059B8 CN104750059B8 (en) 2018-01-19

Family

ID=53589918

Country Status (1)

Country Link
CN (1) CN104750059B8 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106937023B (en) * 2015-12-31 2019-12-13 上海励丰创意展示有限公司 multi-professional collaborative editing and control method for film, television and stage
CN106937021B (en) * 2015-12-31 2019-12-13 上海励丰创意展示有限公司 performance integrated control method based on time axis multi-track playback technology
CN106937022B (en) * 2015-12-31 2019-12-13 上海励丰创意展示有限公司 multi-professional collaborative editing and control method for audio, video, light and machinery
CN106547249B (en) * 2016-10-14 2019-03-01 广州励丰文化科技股份有限公司 A kind of mechanical arm console that speech detection is combined with local media and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004119334A (en) * 2002-09-30 2004-04-15 Matsushita Electric Works Ltd Light control data processing device
CN101494933A (en) * 2008-09-27 2009-07-29 嘉力时(集团)有限公司 Method for recording DMX signal
CN101655988A (en) * 2008-08-19 2010-02-24 北京理工大学 System for three-dimensional interactive virtual arrangement of large-scale artistic performance
CN101893886A (en) * 2010-07-27 2010-11-24 北京水晶石数字科技有限公司 Rehearsal and performance control system
CN101916095A (en) * 2010-07-27 2010-12-15 北京水晶石数字科技有限公司 Rehearsal performance control method
CN102722399A (en) * 2011-03-29 2012-10-10 童玲 Simulation system and method for theatre stage blocking based on object technology and multimedia technology

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by SIPO to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CI03 Correction of invention patent

Correction item: Second inventor

Correct: Li Zhixiong

False: The inventor has waived the right to be mentioned

Number: 46-02

Page: The title page

Volume: 33

CI03 Correction of invention patent